00:00:00.002 Started by upstream project "autotest-per-patch" build number 126159 00:00:00.002 originally caused by: 00:00:00.002 Started by upstream project "jbp-per-patch" build number 23859 00:00:00.002 originally caused by: 00:00:00.002 Started by user sys_sgci 00:00:00.063 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.064 The recommended git tool is: git 00:00:00.064 using credential 00000000-0000-0000-0000-000000000002 00:00:00.066 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.106 Fetching changes from the remote Git repository 00:00:00.108 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.172 Using shallow fetch with depth 1 00:00:00.172 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.172 > git --version # timeout=10 00:00:00.218 > git --version # 'git version 2.39.2' 00:00:00.218 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.245 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.245 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/75/21875/22 # timeout=5 00:00:05.339 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.352 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.365 Checking out Revision 8c6732c9e0fe7c9c74cd1fb560a619e554726af3 (FETCH_HEAD) 00:00:05.365 > git config core.sparsecheckout # timeout=10 00:00:05.378 > git read-tree -mu HEAD # timeout=10 00:00:05.395 > git checkout -f 8c6732c9e0fe7c9c74cd1fb560a619e554726af3 # timeout=5 00:00:05.420 Commit message: "jenkins/jjb-config: Remove SPDK_TEST_RELEASE_BUILD from packaging job" 00:00:05.420 > git rev-list --no-walk b0ebb039b16703d64cc7534b6e0fa0780ed1e683 # timeout=10 00:00:05.516 [Pipeline] Start of Pipeline 00:00:05.530 [Pipeline] library 00:00:05.531 Loading library shm_lib@master 00:00:05.531 Library shm_lib@master is cached. Copying from home. 00:00:05.549 [Pipeline] node 00:00:05.560 Running on CYP12 in /var/jenkins/workspace/nvmf-phy-autotest 00:00:05.561 [Pipeline] { 00:00:05.572 [Pipeline] catchError 00:00:05.573 [Pipeline] { 00:00:05.586 [Pipeline] wrap 00:00:05.597 [Pipeline] { 00:00:05.606 [Pipeline] stage 00:00:05.608 [Pipeline] { (Prologue) 00:00:05.895 [Pipeline] sh 00:00:06.183 + logger -p user.info -t JENKINS-CI 00:00:06.203 [Pipeline] echo 00:00:06.205 Node: CYP12 00:00:06.212 [Pipeline] sh 00:00:06.515 [Pipeline] setCustomBuildProperty 00:00:06.525 [Pipeline] echo 00:00:06.527 Cleanup processes 00:00:06.533 [Pipeline] sh 00:00:06.821 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:06.821 2595277 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:06.837 [Pipeline] sh 00:00:07.124 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:07.124 ++ grep -v 'sudo pgrep' 00:00:07.124 ++ awk '{print $1}' 00:00:07.124 + sudo kill -9 00:00:07.124 + true 00:00:07.142 [Pipeline] cleanWs 00:00:07.153 [WS-CLEANUP] Deleting project workspace... 00:00:07.154 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.161 [WS-CLEANUP] done 00:00:07.166 [Pipeline] setCustomBuildProperty 00:00:07.182 [Pipeline] sh 00:00:07.467 + sudo git config --global --replace-all safe.directory '*' 00:00:07.589 [Pipeline] httpRequest 00:00:07.614 [Pipeline] echo 00:00:07.616 Sorcerer 10.211.164.101 is alive 00:00:07.625 [Pipeline] httpRequest 00:00:07.630 HttpMethod: GET 00:00:07.631 URL: http://10.211.164.101/packages/jbp_8c6732c9e0fe7c9c74cd1fb560a619e554726af3.tar.gz 00:00:07.631 Sending request to url: http://10.211.164.101/packages/jbp_8c6732c9e0fe7c9c74cd1fb560a619e554726af3.tar.gz 00:00:07.657 Response Code: HTTP/1.1 200 OK 00:00:07.657 Success: Status code 200 is in the accepted range: 200,404 00:00:07.658 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/jbp_8c6732c9e0fe7c9c74cd1fb560a619e554726af3.tar.gz 00:00:30.982 [Pipeline] sh 00:00:31.269 + tar --no-same-owner -xf jbp_8c6732c9e0fe7c9c74cd1fb560a619e554726af3.tar.gz 00:00:31.288 [Pipeline] httpRequest 00:00:31.309 [Pipeline] echo 00:00:31.311 Sorcerer 10.211.164.101 is alive 00:00:31.321 [Pipeline] httpRequest 00:00:31.326 HttpMethod: GET 00:00:31.327 URL: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:31.328 Sending request to url: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:00:31.336 Response Code: HTTP/1.1 200 OK 00:00:31.337 Success: Status code 200 is in the accepted range: 200,404 00:00:31.338 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:01:51.731 [Pipeline] sh 00:01:52.016 + tar --no-same-owner -xf spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz 00:01:54.567 [Pipeline] sh 00:01:54.851 + git -C spdk log --oneline -n5 00:01:54.851 719d03c6a sock/uring: only register net impl if supported 00:01:54.851 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev 00:01:54.851 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO 00:01:54.851 6c7c1f57e accel: add sequence outstanding stat 00:01:54.851 3bc8e6a26 accel: add utility to put task 00:01:54.865 [Pipeline] } 00:01:54.877 [Pipeline] // stage 00:01:54.884 [Pipeline] stage 00:01:54.886 [Pipeline] { (Prepare) 00:01:54.902 [Pipeline] writeFile 00:01:54.920 [Pipeline] sh 00:01:55.205 + logger -p user.info -t JENKINS-CI 00:01:55.220 [Pipeline] sh 00:01:55.505 + logger -p user.info -t JENKINS-CI 00:01:55.529 [Pipeline] sh 00:01:55.819 + cat autorun-spdk.conf 00:01:55.819 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:55.819 SPDK_TEST_NVMF=1 00:01:55.819 SPDK_TEST_NVME_CLI=1 00:01:55.819 SPDK_TEST_NVMF_NICS=mlx5 00:01:55.819 SPDK_RUN_UBSAN=1 00:01:55.819 NET_TYPE=phy 00:01:55.828 RUN_NIGHTLY=0 00:01:55.833 [Pipeline] readFile 00:01:55.863 [Pipeline] withEnv 00:01:55.865 [Pipeline] { 00:01:55.881 [Pipeline] sh 00:01:56.175 + set -ex 00:01:56.175 + [[ -f /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf ]] 00:01:56.175 + source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:56.175 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:56.175 ++ SPDK_TEST_NVMF=1 00:01:56.175 ++ SPDK_TEST_NVME_CLI=1 00:01:56.175 ++ SPDK_TEST_NVMF_NICS=mlx5 00:01:56.175 ++ SPDK_RUN_UBSAN=1 00:01:56.175 ++ NET_TYPE=phy 00:01:56.175 ++ RUN_NIGHTLY=0 00:01:56.175 + case $SPDK_TEST_NVMF_NICS in 00:01:56.175 + DRIVERS=mlx5_ib 00:01:56.175 + [[ -n mlx5_ib ]] 00:01:56.175 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:56.175 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:02:06.205 rmmod: ERROR: Module irdma is not 
currently loaded 00:02:06.205 rmmod: ERROR: Module i40iw is not currently loaded 00:02:06.205 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:02:06.205 + true 00:02:06.205 + for D in $DRIVERS 00:02:06.205 + sudo modprobe mlx5_ib 00:02:06.205 + exit 0 00:02:06.215 [Pipeline] } 00:02:06.230 [Pipeline] // withEnv 00:02:06.234 [Pipeline] } 00:02:06.251 [Pipeline] // stage 00:02:06.259 [Pipeline] catchError 00:02:06.261 [Pipeline] { 00:02:06.274 [Pipeline] timeout 00:02:06.274 Timeout set to expire in 1 hr 0 min 00:02:06.276 [Pipeline] { 00:02:06.290 [Pipeline] stage 00:02:06.292 [Pipeline] { (Tests) 00:02:06.305 [Pipeline] sh 00:02:06.592 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-phy-autotest 00:02:06.592 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest 00:02:06.592 + DIR_ROOT=/var/jenkins/workspace/nvmf-phy-autotest 00:02:06.592 + [[ -n /var/jenkins/workspace/nvmf-phy-autotest ]] 00:02:06.592 + DIR_SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:06.592 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-phy-autotest/output 00:02:06.592 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/spdk ]] 00:02:06.592 + [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:02:06.592 + mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/output 00:02:06.592 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:02:06.592 + [[ nvmf-phy-autotest == pkgdep-* ]] 00:02:06.592 + cd /var/jenkins/workspace/nvmf-phy-autotest 00:02:06.592 + source /etc/os-release 00:02:06.592 ++ NAME='Fedora Linux' 00:02:06.592 ++ VERSION='38 (Cloud Edition)' 00:02:06.592 ++ ID=fedora 00:02:06.592 ++ VERSION_ID=38 00:02:06.592 ++ VERSION_CODENAME= 00:02:06.592 ++ PLATFORM_ID=platform:f38 00:02:06.592 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:02:06.592 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:06.592 ++ LOGO=fedora-logo-icon 00:02:06.592 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:02:06.592 ++ HOME_URL=https://fedoraproject.org/ 00:02:06.592 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:02:06.592 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:06.592 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:06.592 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:06.592 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:02:06.592 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:06.592 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:02:06.592 ++ SUPPORT_END=2024-05-14 00:02:06.592 ++ VARIANT='Cloud Edition' 00:02:06.592 ++ VARIANT_ID=cloud 00:02:06.592 + uname -a 00:02:06.592 Linux spdk-cyp-12 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:02:06.592 + sudo /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:02:09.892 Hugepages 00:02:09.893 node hugesize free / total 00:02:09.893 node0 1048576kB 0 / 0 00:02:09.893 node0 2048kB 0 / 0 00:02:09.893 node1 1048576kB 0 / 0 00:02:09.893 node1 2048kB 0 / 0 00:02:09.893 00:02:09.893 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:09.893 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:02:09.893 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:02:09.893 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:02:09.893 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:02:09.893 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:02:09.893 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:02:09.893 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:02:09.893 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:02:09.893 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 
00:02:09.893 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:02:09.893 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:02:09.893 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:02:09.893 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:02:09.893 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:02:09.893 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:02:09.893 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:02:09.893 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:02:09.893 + rm -f /tmp/spdk-ld-path 00:02:10.154 + source autorun-spdk.conf 00:02:10.154 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:10.154 ++ SPDK_TEST_NVMF=1 00:02:10.154 ++ SPDK_TEST_NVME_CLI=1 00:02:10.154 ++ SPDK_TEST_NVMF_NICS=mlx5 00:02:10.154 ++ SPDK_RUN_UBSAN=1 00:02:10.154 ++ NET_TYPE=phy 00:02:10.154 ++ RUN_NIGHTLY=0 00:02:10.154 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:10.154 + [[ -n '' ]] 00:02:10.154 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:10.154 + for M in /var/spdk/build-*-manifest.txt 00:02:10.154 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:10.154 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:02:10.154 + for M in /var/spdk/build-*-manifest.txt 00:02:10.154 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:10.154 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:02:10.154 ++ uname 00:02:10.154 + [[ Linux == \L\i\n\u\x ]] 00:02:10.154 + sudo dmesg -T 00:02:10.154 + sudo dmesg --clear 00:02:10.154 + dmesg_pid=2596953 00:02:10.154 + [[ Fedora Linux == FreeBSD ]] 00:02:10.154 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:10.154 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:10.154 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:10.154 + [[ -x /usr/src/fio-static/fio ]] 00:02:10.154 + export FIO_BIN=/usr/src/fio-static/fio 00:02:10.154 + FIO_BIN=/usr/src/fio-static/fio 00:02:10.154 + sudo dmesg -Tw 00:02:10.154 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:10.154 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:02:10.154 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:10.154 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:10.154 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:10.154 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:10.154 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:10.154 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:10.154 + spdk/autorun.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:02:10.154 Test configuration: 00:02:10.154 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:10.154 SPDK_TEST_NVMF=1 00:02:10.154 SPDK_TEST_NVME_CLI=1 00:02:10.154 SPDK_TEST_NVMF_NICS=mlx5 00:02:10.154 SPDK_RUN_UBSAN=1 00:02:10.154 NET_TYPE=phy 00:02:10.154 RUN_NIGHTLY=0 10:09:47 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:02:10.154 10:09:47 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:10.154 10:09:47 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:10.154 10:09:47 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:10.154 10:09:47 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:10.154 10:09:47 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:10.154 10:09:47 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:10.154 10:09:47 -- paths/export.sh@5 -- $ export PATH 00:02:10.154 10:09:47 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:10.154 10:09:47 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:02:10.154 10:09:47 -- common/autobuild_common.sh@444 -- $ date +%s 00:02:10.154 10:09:47 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721030987.XXXXXX 00:02:10.154 10:09:47 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721030987.W6WDHh 00:02:10.154 10:09:47 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:02:10.154 10:09:47 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:02:10.154 10:09:47 -- common/autobuild_common.sh@453 
-- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/' 00:02:10.154 10:09:47 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:02:10.154 10:09:47 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:02:10.154 10:09:47 -- common/autobuild_common.sh@460 -- $ get_config_params 00:02:10.154 10:09:47 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:02:10.154 10:09:47 -- common/autotest_common.sh@10 -- $ set +x 00:02:10.154 10:09:47 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:02:10.154 10:09:47 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:02:10.154 10:09:47 -- pm/common@17 -- $ local monitor 00:02:10.154 10:09:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:10.154 10:09:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:10.154 10:09:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:10.154 10:09:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:10.154 10:09:47 -- pm/common@21 -- $ date +%s 00:02:10.154 10:09:47 -- pm/common@25 -- $ sleep 1 00:02:10.154 10:09:47 -- pm/common@21 -- $ date +%s 00:02:10.154 10:09:47 -- pm/common@21 -- $ date +%s 00:02:10.154 10:09:47 -- pm/common@21 -- $ date +%s 00:02:10.154 10:09:47 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721030987 00:02:10.154 10:09:47 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721030987 00:02:10.154 10:09:47 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721030987 00:02:10.154 10:09:47 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721030987 00:02:10.416 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721030987_collect-vmstat.pm.log 00:02:10.416 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721030987_collect-cpu-load.pm.log 00:02:10.416 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721030987_collect-cpu-temp.pm.log 00:02:10.416 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721030987_collect-bmc-pm.bmc.pm.log 00:02:11.359 10:09:48 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:02:11.359 10:09:48 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:11.359 10:09:48 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:11.359 10:09:48 -- spdk/autobuild.sh@13 -- $ cd 
/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:11.359 10:09:48 -- spdk/autobuild.sh@16 -- $ date -u 00:02:11.359 Mon Jul 15 08:09:48 AM UTC 2024 00:02:11.359 10:09:48 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:11.359 v24.09-pre-202-g719d03c6a 00:02:11.359 10:09:48 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:11.359 10:09:48 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:11.359 10:09:48 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:11.359 10:09:48 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:11.359 10:09:48 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:11.359 10:09:48 -- common/autotest_common.sh@10 -- $ set +x 00:02:11.359 ************************************ 00:02:11.359 START TEST ubsan 00:02:11.359 ************************************ 00:02:11.359 10:09:48 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:02:11.359 using ubsan 00:02:11.359 00:02:11.359 real 0m0.000s 00:02:11.359 user 0m0.000s 00:02:11.359 sys 0m0.000s 00:02:11.359 10:09:48 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:11.359 10:09:48 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:11.359 ************************************ 00:02:11.359 END TEST ubsan 00:02:11.359 ************************************ 00:02:11.359 10:09:48 -- common/autotest_common.sh@1142 -- $ return 0 00:02:11.359 10:09:48 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:11.359 10:09:48 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:11.359 10:09:48 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:11.359 10:09:48 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:11.359 10:09:48 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:11.359 10:09:48 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:11.359 10:09:48 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:11.359 10:09:48 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:11.359 10:09:48 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-shared 00:02:11.359 Using default SPDK env in /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:02:11.359 Using default DPDK in /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:02:11.943 Using 'verbs' RDMA provider 00:02:27.795 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:40.026 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:40.026 Creating mk/config.mk...done. 00:02:40.026 Creating mk/cc.flags.mk...done. 00:02:40.026 Type 'make' to build. 00:02:40.026 10:10:16 -- spdk/autobuild.sh@69 -- $ run_test make make -j144 00:02:40.026 10:10:16 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:40.026 10:10:16 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:40.026 10:10:16 -- common/autotest_common.sh@10 -- $ set +x 00:02:40.026 ************************************ 00:02:40.026 START TEST make 00:02:40.026 ************************************ 00:02:40.026 10:10:16 make -- common/autotest_common.sh@1123 -- $ make -j144 00:02:40.026 make[1]: Nothing to be done for 'all'. 
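The configure line above selects a debug, UBSan-instrumented build with the RDMA transport against SPDK's bundled DPDK. A minimal sketch of a comparable local build, assuming an SPDK checkout with submodules in ./spdk and fio sources under /usr/src/fio as on this node (the parallelism below is an assumption; this CI job builds with -j144):

# Sketch only: mirrors the main flags recorded in the log above; the
# checkout path and -j value are assumptions, not taken from the job.
cd spdk
./configure --enable-debug --enable-werror --enable-ubsan \
            --with-rdma --with-fio=/usr/src/fio --with-shared
make -j"$(nproc)"

The job additionally passes --with-idxd, --with-iscsi-initiator, --disable-unit-tests, --enable-coverage and --with-ublk, which the sketch omits for brevity.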
00:02:48.149 The Meson build system 00:02:48.149 Version: 1.3.1 00:02:48.149 Source dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk 00:02:48.149 Build dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp 00:02:48.149 Build type: native build 00:02:48.149 Program cat found: YES (/usr/bin/cat) 00:02:48.149 Project name: DPDK 00:02:48.149 Project version: 24.03.0 00:02:48.149 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:48.149 C linker for the host machine: cc ld.bfd 2.39-16 00:02:48.149 Host machine cpu family: x86_64 00:02:48.149 Host machine cpu: x86_64 00:02:48.149 Message: ## Building in Developer Mode ## 00:02:48.149 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:48.149 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:48.149 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:48.149 Program python3 found: YES (/usr/bin/python3) 00:02:48.149 Program cat found: YES (/usr/bin/cat) 00:02:48.149 Compiler for C supports arguments -march=native: YES 00:02:48.149 Checking for size of "void *" : 8 00:02:48.149 Checking for size of "void *" : 8 (cached) 00:02:48.149 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:48.149 Library m found: YES 00:02:48.149 Library numa found: YES 00:02:48.149 Has header "numaif.h" : YES 00:02:48.149 Library fdt found: NO 00:02:48.149 Library execinfo found: NO 00:02:48.149 Has header "execinfo.h" : YES 00:02:48.149 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:48.149 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:48.149 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:48.149 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:48.149 Run-time dependency openssl found: YES 3.0.9 00:02:48.149 Run-time dependency libpcap found: YES 1.10.4 00:02:48.149 Has header "pcap.h" with dependency libpcap: YES 00:02:48.149 Compiler for C supports arguments -Wcast-qual: YES 00:02:48.149 Compiler for C supports arguments -Wdeprecated: YES 00:02:48.149 Compiler for C supports arguments -Wformat: YES 00:02:48.149 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:48.149 Compiler for C supports arguments -Wformat-security: NO 00:02:48.149 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:48.149 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:48.149 Compiler for C supports arguments -Wnested-externs: YES 00:02:48.149 Compiler for C supports arguments -Wold-style-definition: YES 00:02:48.149 Compiler for C supports arguments -Wpointer-arith: YES 00:02:48.149 Compiler for C supports arguments -Wsign-compare: YES 00:02:48.149 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:48.149 Compiler for C supports arguments -Wundef: YES 00:02:48.149 Compiler for C supports arguments -Wwrite-strings: YES 00:02:48.149 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:48.149 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:48.149 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:48.149 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:48.149 Program objdump found: YES (/usr/bin/objdump) 00:02:48.149 Compiler for C supports arguments -mavx512f: YES 00:02:48.149 Checking if "AVX512 checking" compiles: YES 00:02:48.149 Fetching 
value of define "__SSE4_2__" : 1 00:02:48.149 Fetching value of define "__AES__" : 1 00:02:48.149 Fetching value of define "__AVX__" : 1 00:02:48.149 Fetching value of define "__AVX2__" : 1 00:02:48.149 Fetching value of define "__AVX512BW__" : 1 00:02:48.149 Fetching value of define "__AVX512CD__" : 1 00:02:48.149 Fetching value of define "__AVX512DQ__" : 1 00:02:48.149 Fetching value of define "__AVX512F__" : 1 00:02:48.149 Fetching value of define "__AVX512VL__" : 1 00:02:48.149 Fetching value of define "__PCLMUL__" : 1 00:02:48.149 Fetching value of define "__RDRND__" : 1 00:02:48.149 Fetching value of define "__RDSEED__" : 1 00:02:48.149 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:48.149 Fetching value of define "__znver1__" : (undefined) 00:02:48.149 Fetching value of define "__znver2__" : (undefined) 00:02:48.149 Fetching value of define "__znver3__" : (undefined) 00:02:48.149 Fetching value of define "__znver4__" : (undefined) 00:02:48.149 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:48.149 Message: lib/log: Defining dependency "log" 00:02:48.149 Message: lib/kvargs: Defining dependency "kvargs" 00:02:48.149 Message: lib/telemetry: Defining dependency "telemetry" 00:02:48.149 Checking for function "getentropy" : NO 00:02:48.149 Message: lib/eal: Defining dependency "eal" 00:02:48.149 Message: lib/ring: Defining dependency "ring" 00:02:48.149 Message: lib/rcu: Defining dependency "rcu" 00:02:48.149 Message: lib/mempool: Defining dependency "mempool" 00:02:48.149 Message: lib/mbuf: Defining dependency "mbuf" 00:02:48.149 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:48.149 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:48.149 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:48.149 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:48.149 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:48.149 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:48.149 Compiler for C supports arguments -mpclmul: YES 00:02:48.149 Compiler for C supports arguments -maes: YES 00:02:48.149 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:48.149 Compiler for C supports arguments -mavx512bw: YES 00:02:48.149 Compiler for C supports arguments -mavx512dq: YES 00:02:48.149 Compiler for C supports arguments -mavx512vl: YES 00:02:48.149 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:48.149 Compiler for C supports arguments -mavx2: YES 00:02:48.149 Compiler for C supports arguments -mavx: YES 00:02:48.149 Message: lib/net: Defining dependency "net" 00:02:48.149 Message: lib/meter: Defining dependency "meter" 00:02:48.149 Message: lib/ethdev: Defining dependency "ethdev" 00:02:48.149 Message: lib/pci: Defining dependency "pci" 00:02:48.149 Message: lib/cmdline: Defining dependency "cmdline" 00:02:48.149 Message: lib/hash: Defining dependency "hash" 00:02:48.149 Message: lib/timer: Defining dependency "timer" 00:02:48.149 Message: lib/compressdev: Defining dependency "compressdev" 00:02:48.149 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:48.149 Message: lib/dmadev: Defining dependency "dmadev" 00:02:48.149 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:48.149 Message: lib/power: Defining dependency "power" 00:02:48.149 Message: lib/reorder: Defining dependency "reorder" 00:02:48.149 Message: lib/security: Defining dependency "security" 00:02:48.149 Has header "linux/userfaultfd.h" : YES 00:02:48.149 Has header "linux/vduse.h" : YES 00:02:48.149 Message: lib/vhost: Defining 
dependency "vhost" 00:02:48.149 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:48.149 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:48.149 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:48.149 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:48.149 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:48.149 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:48.149 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:48.149 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:48.149 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:48.149 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:48.149 Program doxygen found: YES (/usr/bin/doxygen) 00:02:48.149 Configuring doxy-api-html.conf using configuration 00:02:48.149 Configuring doxy-api-man.conf using configuration 00:02:48.149 Program mandb found: YES (/usr/bin/mandb) 00:02:48.149 Program sphinx-build found: NO 00:02:48.149 Configuring rte_build_config.h using configuration 00:02:48.149 Message: 00:02:48.149 ================= 00:02:48.149 Applications Enabled 00:02:48.149 ================= 00:02:48.149 00:02:48.149 apps: 00:02:48.149 00:02:48.149 00:02:48.149 Message: 00:02:48.149 ================= 00:02:48.149 Libraries Enabled 00:02:48.149 ================= 00:02:48.149 00:02:48.149 libs: 00:02:48.149 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:48.149 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:48.149 cryptodev, dmadev, power, reorder, security, vhost, 00:02:48.149 00:02:48.149 Message: 00:02:48.149 =============== 00:02:48.149 Drivers Enabled 00:02:48.149 =============== 00:02:48.149 00:02:48.149 common: 00:02:48.149 00:02:48.149 bus: 00:02:48.149 pci, vdev, 00:02:48.149 mempool: 00:02:48.149 ring, 00:02:48.149 dma: 00:02:48.149 00:02:48.149 net: 00:02:48.149 00:02:48.149 crypto: 00:02:48.149 00:02:48.149 compress: 00:02:48.149 00:02:48.149 vdpa: 00:02:48.149 00:02:48.149 00:02:48.149 Message: 00:02:48.149 ================= 00:02:48.149 Content Skipped 00:02:48.149 ================= 00:02:48.149 00:02:48.149 apps: 00:02:48.149 dumpcap: explicitly disabled via build config 00:02:48.149 graph: explicitly disabled via build config 00:02:48.149 pdump: explicitly disabled via build config 00:02:48.149 proc-info: explicitly disabled via build config 00:02:48.149 test-acl: explicitly disabled via build config 00:02:48.149 test-bbdev: explicitly disabled via build config 00:02:48.149 test-cmdline: explicitly disabled via build config 00:02:48.149 test-compress-perf: explicitly disabled via build config 00:02:48.149 test-crypto-perf: explicitly disabled via build config 00:02:48.149 test-dma-perf: explicitly disabled via build config 00:02:48.149 test-eventdev: explicitly disabled via build config 00:02:48.149 test-fib: explicitly disabled via build config 00:02:48.149 test-flow-perf: explicitly disabled via build config 00:02:48.149 test-gpudev: explicitly disabled via build config 00:02:48.149 test-mldev: explicitly disabled via build config 00:02:48.149 test-pipeline: explicitly disabled via build config 00:02:48.149 test-pmd: explicitly disabled via build config 00:02:48.149 test-regex: explicitly disabled via build config 00:02:48.150 test-sad: explicitly disabled via build config 00:02:48.150 test-security-perf: explicitly disabled via build config 
00:02:48.150 00:02:48.150 libs: 00:02:48.150 argparse: explicitly disabled via build config 00:02:48.150 metrics: explicitly disabled via build config 00:02:48.150 acl: explicitly disabled via build config 00:02:48.150 bbdev: explicitly disabled via build config 00:02:48.150 bitratestats: explicitly disabled via build config 00:02:48.150 bpf: explicitly disabled via build config 00:02:48.150 cfgfile: explicitly disabled via build config 00:02:48.150 distributor: explicitly disabled via build config 00:02:48.150 efd: explicitly disabled via build config 00:02:48.150 eventdev: explicitly disabled via build config 00:02:48.150 dispatcher: explicitly disabled via build config 00:02:48.150 gpudev: explicitly disabled via build config 00:02:48.150 gro: explicitly disabled via build config 00:02:48.150 gso: explicitly disabled via build config 00:02:48.150 ip_frag: explicitly disabled via build config 00:02:48.150 jobstats: explicitly disabled via build config 00:02:48.150 latencystats: explicitly disabled via build config 00:02:48.150 lpm: explicitly disabled via build config 00:02:48.150 member: explicitly disabled via build config 00:02:48.150 pcapng: explicitly disabled via build config 00:02:48.150 rawdev: explicitly disabled via build config 00:02:48.150 regexdev: explicitly disabled via build config 00:02:48.150 mldev: explicitly disabled via build config 00:02:48.150 rib: explicitly disabled via build config 00:02:48.150 sched: explicitly disabled via build config 00:02:48.150 stack: explicitly disabled via build config 00:02:48.150 ipsec: explicitly disabled via build config 00:02:48.150 pdcp: explicitly disabled via build config 00:02:48.150 fib: explicitly disabled via build config 00:02:48.150 port: explicitly disabled via build config 00:02:48.150 pdump: explicitly disabled via build config 00:02:48.150 table: explicitly disabled via build config 00:02:48.150 pipeline: explicitly disabled via build config 00:02:48.150 graph: explicitly disabled via build config 00:02:48.150 node: explicitly disabled via build config 00:02:48.150 00:02:48.150 drivers: 00:02:48.150 common/cpt: not in enabled drivers build config 00:02:48.150 common/dpaax: not in enabled drivers build config 00:02:48.150 common/iavf: not in enabled drivers build config 00:02:48.150 common/idpf: not in enabled drivers build config 00:02:48.150 common/ionic: not in enabled drivers build config 00:02:48.150 common/mvep: not in enabled drivers build config 00:02:48.150 common/octeontx: not in enabled drivers build config 00:02:48.150 bus/auxiliary: not in enabled drivers build config 00:02:48.150 bus/cdx: not in enabled drivers build config 00:02:48.150 bus/dpaa: not in enabled drivers build config 00:02:48.150 bus/fslmc: not in enabled drivers build config 00:02:48.150 bus/ifpga: not in enabled drivers build config 00:02:48.150 bus/platform: not in enabled drivers build config 00:02:48.150 bus/uacce: not in enabled drivers build config 00:02:48.150 bus/vmbus: not in enabled drivers build config 00:02:48.150 common/cnxk: not in enabled drivers build config 00:02:48.150 common/mlx5: not in enabled drivers build config 00:02:48.150 common/nfp: not in enabled drivers build config 00:02:48.150 common/nitrox: not in enabled drivers build config 00:02:48.150 common/qat: not in enabled drivers build config 00:02:48.150 common/sfc_efx: not in enabled drivers build config 00:02:48.150 mempool/bucket: not in enabled drivers build config 00:02:48.150 mempool/cnxk: not in enabled drivers build config 00:02:48.150 mempool/dpaa: not in 
enabled drivers build config 00:02:48.150 mempool/dpaa2: not in enabled drivers build config 00:02:48.150 mempool/octeontx: not in enabled drivers build config 00:02:48.150 mempool/stack: not in enabled drivers build config 00:02:48.150 dma/cnxk: not in enabled drivers build config 00:02:48.150 dma/dpaa: not in enabled drivers build config 00:02:48.150 dma/dpaa2: not in enabled drivers build config 00:02:48.150 dma/hisilicon: not in enabled drivers build config 00:02:48.150 dma/idxd: not in enabled drivers build config 00:02:48.150 dma/ioat: not in enabled drivers build config 00:02:48.150 dma/skeleton: not in enabled drivers build config 00:02:48.150 net/af_packet: not in enabled drivers build config 00:02:48.150 net/af_xdp: not in enabled drivers build config 00:02:48.150 net/ark: not in enabled drivers build config 00:02:48.150 net/atlantic: not in enabled drivers build config 00:02:48.150 net/avp: not in enabled drivers build config 00:02:48.150 net/axgbe: not in enabled drivers build config 00:02:48.150 net/bnx2x: not in enabled drivers build config 00:02:48.150 net/bnxt: not in enabled drivers build config 00:02:48.150 net/bonding: not in enabled drivers build config 00:02:48.150 net/cnxk: not in enabled drivers build config 00:02:48.150 net/cpfl: not in enabled drivers build config 00:02:48.150 net/cxgbe: not in enabled drivers build config 00:02:48.150 net/dpaa: not in enabled drivers build config 00:02:48.150 net/dpaa2: not in enabled drivers build config 00:02:48.150 net/e1000: not in enabled drivers build config 00:02:48.150 net/ena: not in enabled drivers build config 00:02:48.150 net/enetc: not in enabled drivers build config 00:02:48.150 net/enetfec: not in enabled drivers build config 00:02:48.150 net/enic: not in enabled drivers build config 00:02:48.150 net/failsafe: not in enabled drivers build config 00:02:48.150 net/fm10k: not in enabled drivers build config 00:02:48.150 net/gve: not in enabled drivers build config 00:02:48.150 net/hinic: not in enabled drivers build config 00:02:48.150 net/hns3: not in enabled drivers build config 00:02:48.150 net/i40e: not in enabled drivers build config 00:02:48.150 net/iavf: not in enabled drivers build config 00:02:48.150 net/ice: not in enabled drivers build config 00:02:48.150 net/idpf: not in enabled drivers build config 00:02:48.150 net/igc: not in enabled drivers build config 00:02:48.150 net/ionic: not in enabled drivers build config 00:02:48.150 net/ipn3ke: not in enabled drivers build config 00:02:48.150 net/ixgbe: not in enabled drivers build config 00:02:48.150 net/mana: not in enabled drivers build config 00:02:48.150 net/memif: not in enabled drivers build config 00:02:48.150 net/mlx4: not in enabled drivers build config 00:02:48.150 net/mlx5: not in enabled drivers build config 00:02:48.150 net/mvneta: not in enabled drivers build config 00:02:48.150 net/mvpp2: not in enabled drivers build config 00:02:48.150 net/netvsc: not in enabled drivers build config 00:02:48.150 net/nfb: not in enabled drivers build config 00:02:48.150 net/nfp: not in enabled drivers build config 00:02:48.150 net/ngbe: not in enabled drivers build config 00:02:48.150 net/null: not in enabled drivers build config 00:02:48.150 net/octeontx: not in enabled drivers build config 00:02:48.150 net/octeon_ep: not in enabled drivers build config 00:02:48.150 net/pcap: not in enabled drivers build config 00:02:48.150 net/pfe: not in enabled drivers build config 00:02:48.150 net/qede: not in enabled drivers build config 00:02:48.150 net/ring: not in 
enabled drivers build config 00:02:48.150 net/sfc: not in enabled drivers build config 00:02:48.150 net/softnic: not in enabled drivers build config 00:02:48.150 net/tap: not in enabled drivers build config 00:02:48.150 net/thunderx: not in enabled drivers build config 00:02:48.150 net/txgbe: not in enabled drivers build config 00:02:48.150 net/vdev_netvsc: not in enabled drivers build config 00:02:48.150 net/vhost: not in enabled drivers build config 00:02:48.150 net/virtio: not in enabled drivers build config 00:02:48.150 net/vmxnet3: not in enabled drivers build config 00:02:48.150 raw/*: missing internal dependency, "rawdev" 00:02:48.150 crypto/armv8: not in enabled drivers build config 00:02:48.150 crypto/bcmfs: not in enabled drivers build config 00:02:48.150 crypto/caam_jr: not in enabled drivers build config 00:02:48.150 crypto/ccp: not in enabled drivers build config 00:02:48.150 crypto/cnxk: not in enabled drivers build config 00:02:48.150 crypto/dpaa_sec: not in enabled drivers build config 00:02:48.150 crypto/dpaa2_sec: not in enabled drivers build config 00:02:48.150 crypto/ipsec_mb: not in enabled drivers build config 00:02:48.150 crypto/mlx5: not in enabled drivers build config 00:02:48.150 crypto/mvsam: not in enabled drivers build config 00:02:48.150 crypto/nitrox: not in enabled drivers build config 00:02:48.150 crypto/null: not in enabled drivers build config 00:02:48.150 crypto/octeontx: not in enabled drivers build config 00:02:48.150 crypto/openssl: not in enabled drivers build config 00:02:48.150 crypto/scheduler: not in enabled drivers build config 00:02:48.150 crypto/uadk: not in enabled drivers build config 00:02:48.150 crypto/virtio: not in enabled drivers build config 00:02:48.150 compress/isal: not in enabled drivers build config 00:02:48.150 compress/mlx5: not in enabled drivers build config 00:02:48.150 compress/nitrox: not in enabled drivers build config 00:02:48.150 compress/octeontx: not in enabled drivers build config 00:02:48.150 compress/zlib: not in enabled drivers build config 00:02:48.150 regex/*: missing internal dependency, "regexdev" 00:02:48.150 ml/*: missing internal dependency, "mldev" 00:02:48.150 vdpa/ifc: not in enabled drivers build config 00:02:48.150 vdpa/mlx5: not in enabled drivers build config 00:02:48.150 vdpa/nfp: not in enabled drivers build config 00:02:48.150 vdpa/sfc: not in enabled drivers build config 00:02:48.150 event/*: missing internal dependency, "eventdev" 00:02:48.150 baseband/*: missing internal dependency, "bbdev" 00:02:48.150 gpu/*: missing internal dependency, "gpudev" 00:02:48.150 00:02:48.150 00:02:48.150 Build targets in project: 84 00:02:48.150 00:02:48.150 DPDK 24.03.0 00:02:48.150 00:02:48.150 User defined options 00:02:48.150 buildtype : debug 00:02:48.150 default_library : shared 00:02:48.150 libdir : lib 00:02:48.150 prefix : /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:02:48.150 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:48.150 c_link_args : 00:02:48.150 cpu_instruction_set: native 00:02:48.150 disable_apps : test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:02:48.150 disable_libs : 
port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:02:48.150 enable_docs : false 00:02:48.150 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:48.150 enable_kmods : false 00:02:48.150 max_lcores : 128 00:02:48.150 tests : false 00:02:48.150 00:02:48.150 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:48.410 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp' 00:02:48.674 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:48.674 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:48.674 [3/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:48.674 [4/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:48.674 [5/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:48.674 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:48.674 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:48.674 [8/267] Linking static target lib/librte_kvargs.a 00:02:48.674 [9/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:48.674 [10/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:48.674 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:48.674 [12/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:48.674 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:48.674 [14/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:48.674 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:48.674 [16/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:48.674 [17/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:48.674 [18/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:48.674 [19/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:48.674 [20/267] Linking static target lib/librte_log.a 00:02:48.674 [21/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:48.674 [22/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:48.674 [23/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:48.674 [24/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:48.674 [25/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:48.674 [26/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:48.674 [27/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:48.931 [28/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:48.931 [29/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:48.931 [30/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:48.931 [31/267] Linking static target lib/librte_pci.a 00:02:48.931 [32/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:48.931 [33/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:48.931 [34/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 
00:02:48.931 [35/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:48.931 [36/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:48.931 [37/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:48.931 [38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:48.931 [39/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:48.931 [40/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.931 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:48.931 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:48.931 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:48.931 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:48.931 [45/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:48.931 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:49.190 [47/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.190 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:49.190 [49/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:49.190 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:49.190 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:49.190 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:49.191 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:49.191 [54/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:49.191 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:49.191 [56/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:49.191 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:49.191 [58/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:49.191 [59/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:49.191 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:49.191 [61/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:49.191 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:49.191 [63/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:49.191 [64/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:49.191 [65/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:49.191 [66/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:49.191 [67/267] Linking static target lib/librte_telemetry.a 00:02:49.191 [68/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:49.191 [69/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:49.191 [70/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:49.191 [71/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:49.191 [72/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:49.191 [73/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:49.191 [74/267] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:49.191 [75/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:49.191 [76/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:49.191 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:49.191 [78/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:49.191 [79/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:49.191 [80/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:49.191 [81/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:49.191 [82/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:49.191 [83/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:49.191 [84/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:49.191 [85/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:49.191 [86/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:49.191 [87/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:49.191 [88/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:49.191 [89/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:49.191 [90/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:49.191 [91/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:49.191 [92/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:49.191 [93/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:49.191 [94/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:49.191 [95/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:49.191 [96/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:49.191 [97/267] Linking static target lib/librte_meter.a 00:02:49.191 [98/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:49.191 [99/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:49.191 [100/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:49.191 [101/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:49.191 [102/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:49.191 [103/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:49.191 [104/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:49.191 [105/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:49.191 [106/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:49.191 [107/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:49.191 [108/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:49.191 [109/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:49.191 [110/267] Linking static target lib/librte_mempool.a 00:02:49.191 [111/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:49.191 [112/267] Linking static target lib/librte_ring.a 00:02:49.191 [113/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:49.191 [114/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:49.191 
[115/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:49.191 [116/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:49.191 [117/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:49.191 [118/267] Linking static target lib/librte_timer.a 00:02:49.191 [119/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:49.191 [120/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:49.191 [121/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:49.191 [122/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:49.191 [123/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:49.191 [124/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:49.191 [125/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:49.191 [126/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:49.191 [127/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:49.191 [128/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:49.191 [129/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:49.191 [130/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:49.191 [131/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:49.191 [132/267] Linking static target lib/librte_cmdline.a 00:02:49.191 [133/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:49.191 [134/267] Linking static target lib/librte_compressdev.a 00:02:49.191 [135/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:49.191 [136/267] Linking static target lib/librte_net.a 00:02:49.191 [137/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:49.191 [138/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:49.191 [139/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:49.191 [140/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:49.191 [141/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:49.191 [142/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:49.191 [143/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:49.191 [144/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:49.191 [145/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.191 [146/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:49.191 [147/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:49.450 [148/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:49.450 [149/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:49.450 [150/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:49.450 [151/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:49.450 [152/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:49.450 [153/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:49.450 [154/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:49.450 [155/267] Compiling C 
object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:49.450 [156/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:49.450 [157/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:49.450 [158/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:49.450 [159/267] Linking static target lib/librte_power.a 00:02:49.450 [160/267] Linking static target lib/librte_dmadev.a 00:02:49.450 [161/267] Linking target lib/librte_log.so.24.1 00:02:49.450 [162/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:49.450 [163/267] Linking static target lib/librte_rcu.a 00:02:49.450 [164/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:49.450 [165/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:49.450 [166/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:49.450 [167/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:49.450 [168/267] Linking static target lib/librte_security.a 00:02:49.450 [169/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:49.450 [170/267] Linking static target lib/librte_eal.a 00:02:49.450 [171/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:49.450 [172/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:49.450 [173/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:49.450 [174/267] Linking static target lib/librte_reorder.a 00:02:49.450 [175/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:49.450 [176/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:49.450 [177/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:49.450 [178/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:49.450 [179/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:49.450 [180/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.450 [181/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:49.450 [182/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:49.450 [183/267] Linking static target lib/librte_hash.a 00:02:49.450 [184/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:49.450 [185/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:49.450 [186/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:49.450 [187/267] Linking static target lib/librte_mbuf.a 00:02:49.450 [188/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:49.450 [189/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:49.450 [190/267] Linking target lib/librte_kvargs.so.24.1 00:02:49.450 [191/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:49.450 [192/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.709 [193/267] Linking static target drivers/librte_bus_vdev.a 00:02:49.709 [194/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:49.709 [195/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.709 [196/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:49.709 [197/267] Compiling C object 
drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:49.709 [198/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:49.709 [199/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:49.709 [200/267] Linking static target drivers/librte_mempool_ring.a 00:02:49.709 [201/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.709 [202/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:49.709 [203/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:49.709 [204/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:49.709 [205/267] Linking static target lib/librte_cryptodev.a 00:02:49.709 [206/267] Linking target lib/librte_telemetry.so.24.1 00:02:49.709 [207/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.709 [208/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:49.709 [209/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:49.709 [210/267] Linking static target drivers/librte_bus_pci.a 00:02:49.709 [211/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.969 [212/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:49.969 [213/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.969 [214/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.969 [215/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.969 [216/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.969 [217/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.229 [218/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:50.229 [219/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.229 [220/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:50.229 [221/267] Linking static target lib/librte_ethdev.a 00:02:50.229 [222/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.490 [223/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.490 [224/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.490 [225/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.490 [226/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.750 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:50.750 [228/267] Linking static target lib/librte_vhost.a 00:02:52.137 [229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.079 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.674 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.058 [232/267] Generating lib/eal.sym_chk with a custom 
command (wrapped by meson to capture output) 00:03:01.058 [233/267] Linking target lib/librte_eal.so.24.1 00:03:01.318 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:01.318 [235/267] Linking target lib/librte_dmadev.so.24.1 00:03:01.318 [236/267] Linking target lib/librte_ring.so.24.1 00:03:01.318 [237/267] Linking target lib/librte_meter.so.24.1 00:03:01.318 [238/267] Linking target lib/librte_pci.so.24.1 00:03:01.318 [239/267] Linking target lib/librte_timer.so.24.1 00:03:01.318 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:03:01.318 [241/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:01.318 [242/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:01.318 [243/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:01.318 [244/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:01.318 [245/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:01.318 [246/267] Linking target lib/librte_mempool.so.24.1 00:03:01.318 [247/267] Linking target lib/librte_rcu.so.24.1 00:03:01.318 [248/267] Linking target drivers/librte_bus_pci.so.24.1 00:03:01.577 [249/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:01.577 [250/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:01.577 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:03:01.577 [252/267] Linking target lib/librte_mbuf.so.24.1 00:03:01.838 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:01.838 [254/267] Linking target lib/librte_cryptodev.so.24.1 00:03:01.838 [255/267] Linking target lib/librte_net.so.24.1 00:03:01.838 [256/267] Linking target lib/librte_reorder.so.24.1 00:03:01.838 [257/267] Linking target lib/librte_compressdev.so.24.1 00:03:01.838 [258/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:01.838 [259/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:01.838 [260/267] Linking target lib/librte_cmdline.so.24.1 00:03:02.099 [261/267] Linking target lib/librte_hash.so.24.1 00:03:02.099 [262/267] Linking target lib/librte_security.so.24.1 00:03:02.099 [263/267] Linking target lib/librte_ethdev.so.24.1 00:03:02.099 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:02.099 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:02.099 [266/267] Linking target lib/librte_power.so.24.1 00:03:02.099 [267/267] Linking target lib/librte_vhost.so.24.1 00:03:02.099 INFO: autodetecting backend as ninja 00:03:02.099 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp -j 144 00:03:03.484 CC lib/log/log.o 00:03:03.484 CC lib/log/log_flags.o 00:03:03.484 CC lib/log/log_deprecated.o 00:03:03.484 CC lib/ut/ut.o 00:03:03.484 CC lib/ut_mock/mock.o 00:03:03.484 LIB libspdk_log.a 00:03:03.484 LIB libspdk_ut_mock.a 00:03:03.484 LIB libspdk_ut.a 00:03:03.484 SO libspdk_log.so.7.0 00:03:03.484 SO libspdk_ut_mock.so.6.0 00:03:03.484 SO libspdk_ut.so.2.0 00:03:03.484 SYMLINK libspdk_log.so 00:03:03.484 SYMLINK libspdk_ut_mock.so 00:03:03.484 SYMLINK libspdk_ut.so 00:03:04.055 CC lib/ioat/ioat.o 00:03:04.055 CC lib/util/base64.o 00:03:04.055 CC 
lib/util/bit_array.o 00:03:04.055 CC lib/util/cpuset.o 00:03:04.055 CC lib/util/crc16.o 00:03:04.055 CC lib/dma/dma.o 00:03:04.055 CC lib/util/crc32.o 00:03:04.055 CC lib/util/crc32c.o 00:03:04.055 CC lib/util/crc32_ieee.o 00:03:04.055 CXX lib/trace_parser/trace.o 00:03:04.055 CC lib/util/crc64.o 00:03:04.055 CC lib/util/dif.o 00:03:04.055 CC lib/util/fd.o 00:03:04.055 CC lib/util/file.o 00:03:04.055 CC lib/util/hexlify.o 00:03:04.055 CC lib/util/iov.o 00:03:04.055 CC lib/util/math.o 00:03:04.055 CC lib/util/pipe.o 00:03:04.055 CC lib/util/strerror_tls.o 00:03:04.055 CC lib/util/string.o 00:03:04.055 CC lib/util/uuid.o 00:03:04.055 CC lib/util/xor.o 00:03:04.055 CC lib/util/fd_group.o 00:03:04.055 CC lib/util/zipf.o 00:03:04.055 CC lib/vfio_user/host/vfio_user_pci.o 00:03:04.055 CC lib/vfio_user/host/vfio_user.o 00:03:04.055 LIB libspdk_dma.a 00:03:04.055 SO libspdk_dma.so.4.0 00:03:04.315 LIB libspdk_ioat.a 00:03:04.315 SO libspdk_ioat.so.7.0 00:03:04.315 SYMLINK libspdk_dma.so 00:03:04.315 SYMLINK libspdk_ioat.so 00:03:04.315 LIB libspdk_vfio_user.a 00:03:04.315 SO libspdk_vfio_user.so.5.0 00:03:04.315 LIB libspdk_util.a 00:03:04.577 SYMLINK libspdk_vfio_user.so 00:03:04.577 SO libspdk_util.so.9.1 00:03:04.577 SYMLINK libspdk_util.so 00:03:04.839 LIB libspdk_trace_parser.a 00:03:04.839 SO libspdk_trace_parser.so.5.0 00:03:04.839 SYMLINK libspdk_trace_parser.so 00:03:05.099 CC lib/vmd/vmd.o 00:03:05.099 CC lib/vmd/led.o 00:03:05.099 CC lib/rdma_utils/rdma_utils.o 00:03:05.099 CC lib/rdma_provider/common.o 00:03:05.099 CC lib/conf/conf.o 00:03:05.099 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:05.099 CC lib/json/json_parse.o 00:03:05.099 CC lib/idxd/idxd.o 00:03:05.099 CC lib/json/json_util.o 00:03:05.099 CC lib/idxd/idxd_user.o 00:03:05.099 CC lib/json/json_write.o 00:03:05.099 CC lib/idxd/idxd_kernel.o 00:03:05.099 CC lib/env_dpdk/env.o 00:03:05.099 CC lib/env_dpdk/memory.o 00:03:05.099 CC lib/env_dpdk/pci.o 00:03:05.099 CC lib/env_dpdk/init.o 00:03:05.099 CC lib/env_dpdk/threads.o 00:03:05.099 CC lib/env_dpdk/pci_ioat.o 00:03:05.099 CC lib/env_dpdk/pci_virtio.o 00:03:05.099 CC lib/env_dpdk/pci_vmd.o 00:03:05.099 CC lib/env_dpdk/pci_idxd.o 00:03:05.099 CC lib/env_dpdk/pci_event.o 00:03:05.099 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:05.099 CC lib/env_dpdk/sigbus_handler.o 00:03:05.099 CC lib/env_dpdk/pci_dpdk.o 00:03:05.099 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:05.360 LIB libspdk_rdma_provider.a 00:03:05.360 LIB libspdk_conf.a 00:03:05.360 SO libspdk_rdma_provider.so.6.0 00:03:05.360 LIB libspdk_rdma_utils.a 00:03:05.360 SO libspdk_conf.so.6.0 00:03:05.360 LIB libspdk_json.a 00:03:05.360 SO libspdk_rdma_utils.so.1.0 00:03:05.360 SYMLINK libspdk_rdma_provider.so 00:03:05.360 SO libspdk_json.so.6.0 00:03:05.360 SYMLINK libspdk_conf.so 00:03:05.360 SYMLINK libspdk_rdma_utils.so 00:03:05.360 SYMLINK libspdk_json.so 00:03:05.621 LIB libspdk_idxd.a 00:03:05.621 SO libspdk_idxd.so.12.0 00:03:05.621 LIB libspdk_vmd.a 00:03:05.621 SO libspdk_vmd.so.6.0 00:03:05.621 SYMLINK libspdk_idxd.so 00:03:05.621 SYMLINK libspdk_vmd.so 00:03:05.882 CC lib/jsonrpc/jsonrpc_server.o 00:03:05.882 CC lib/jsonrpc/jsonrpc_client.o 00:03:05.882 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:05.882 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:06.143 LIB libspdk_jsonrpc.a 00:03:06.143 SO libspdk_jsonrpc.so.6.0 00:03:06.143 SYMLINK libspdk_jsonrpc.so 00:03:06.143 LIB libspdk_env_dpdk.a 00:03:06.404 SO libspdk_env_dpdk.so.14.1 00:03:06.404 SYMLINK libspdk_env_dpdk.so 00:03:06.404 CC lib/rpc/rpc.o 00:03:06.665 LIB 
libspdk_rpc.a 00:03:06.665 SO libspdk_rpc.so.6.0 00:03:06.926 SYMLINK libspdk_rpc.so 00:03:07.187 CC lib/notify/notify.o 00:03:07.187 CC lib/notify/notify_rpc.o 00:03:07.187 CC lib/trace/trace.o 00:03:07.187 CC lib/trace/trace_flags.o 00:03:07.187 CC lib/trace/trace_rpc.o 00:03:07.187 CC lib/keyring/keyring.o 00:03:07.187 CC lib/keyring/keyring_rpc.o 00:03:07.447 LIB libspdk_notify.a 00:03:07.447 SO libspdk_notify.so.6.0 00:03:07.447 LIB libspdk_keyring.a 00:03:07.447 SYMLINK libspdk_notify.so 00:03:07.447 LIB libspdk_trace.a 00:03:07.447 SO libspdk_keyring.so.1.0 00:03:07.447 SO libspdk_trace.so.10.0 00:03:07.447 SYMLINK libspdk_keyring.so 00:03:07.707 SYMLINK libspdk_trace.so 00:03:07.968 CC lib/thread/thread.o 00:03:07.968 CC lib/thread/iobuf.o 00:03:07.968 CC lib/sock/sock.o 00:03:07.968 CC lib/sock/sock_rpc.o 00:03:08.229 LIB libspdk_sock.a 00:03:08.229 SO libspdk_sock.so.10.0 00:03:08.544 SYMLINK libspdk_sock.so 00:03:08.875 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:08.875 CC lib/nvme/nvme_ctrlr.o 00:03:08.875 CC lib/nvme/nvme_fabric.o 00:03:08.875 CC lib/nvme/nvme_ns_cmd.o 00:03:08.875 CC lib/nvme/nvme_ns.o 00:03:08.875 CC lib/nvme/nvme_pcie_common.o 00:03:08.875 CC lib/nvme/nvme_pcie.o 00:03:08.875 CC lib/nvme/nvme_qpair.o 00:03:08.875 CC lib/nvme/nvme.o 00:03:08.875 CC lib/nvme/nvme_quirks.o 00:03:08.875 CC lib/nvme/nvme_transport.o 00:03:08.875 CC lib/nvme/nvme_discovery.o 00:03:08.875 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:08.875 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:08.875 CC lib/nvme/nvme_tcp.o 00:03:08.875 CC lib/nvme/nvme_opal.o 00:03:08.875 CC lib/nvme/nvme_io_msg.o 00:03:08.875 CC lib/nvme/nvme_poll_group.o 00:03:08.875 CC lib/nvme/nvme_zns.o 00:03:08.875 CC lib/nvme/nvme_stubs.o 00:03:08.875 CC lib/nvme/nvme_auth.o 00:03:08.875 CC lib/nvme/nvme_rdma.o 00:03:08.875 CC lib/nvme/nvme_cuse.o 00:03:09.134 LIB libspdk_thread.a 00:03:09.134 SO libspdk_thread.so.10.1 00:03:09.393 SYMLINK libspdk_thread.so 00:03:09.653 CC lib/accel/accel_rpc.o 00:03:09.653 CC lib/accel/accel.o 00:03:09.653 CC lib/accel/accel_sw.o 00:03:09.653 CC lib/init/json_config.o 00:03:09.653 CC lib/init/subsystem.o 00:03:09.653 CC lib/init/subsystem_rpc.o 00:03:09.653 CC lib/init/rpc.o 00:03:09.653 CC lib/virtio/virtio.o 00:03:09.653 CC lib/virtio/virtio_vhost_user.o 00:03:09.653 CC lib/virtio/virtio_vfio_user.o 00:03:09.653 CC lib/blob/blobstore.o 00:03:09.653 CC lib/virtio/virtio_pci.o 00:03:09.653 CC lib/blob/request.o 00:03:09.653 CC lib/blob/zeroes.o 00:03:09.653 CC lib/blob/blob_bs_dev.o 00:03:09.912 LIB libspdk_init.a 00:03:09.912 SO libspdk_init.so.5.0 00:03:09.912 LIB libspdk_virtio.a 00:03:09.912 SO libspdk_virtio.so.7.0 00:03:09.912 SYMLINK libspdk_init.so 00:03:10.173 SYMLINK libspdk_virtio.so 00:03:10.434 CC lib/event/app.o 00:03:10.434 CC lib/event/reactor.o 00:03:10.434 CC lib/event/log_rpc.o 00:03:10.434 CC lib/event/app_rpc.o 00:03:10.434 CC lib/event/scheduler_static.o 00:03:10.434 LIB libspdk_accel.a 00:03:10.434 SO libspdk_accel.so.15.1 00:03:10.434 LIB libspdk_nvme.a 00:03:10.696 SYMLINK libspdk_accel.so 00:03:10.696 SO libspdk_nvme.so.13.1 00:03:10.696 LIB libspdk_event.a 00:03:10.696 SO libspdk_event.so.14.0 00:03:10.957 SYMLINK libspdk_event.so 00:03:10.957 SYMLINK libspdk_nvme.so 00:03:10.957 CC lib/bdev/bdev.o 00:03:10.957 CC lib/bdev/bdev_rpc.o 00:03:10.957 CC lib/bdev/bdev_zone.o 00:03:10.957 CC lib/bdev/part.o 00:03:10.957 CC lib/bdev/scsi_nvme.o 00:03:12.340 LIB libspdk_blob.a 00:03:12.340 SO libspdk_blob.so.11.0 00:03:12.340 SYMLINK libspdk_blob.so 00:03:12.601 CC 
lib/blobfs/blobfs.o 00:03:12.601 CC lib/blobfs/tree.o 00:03:12.601 CC lib/lvol/lvol.o 00:03:13.170 LIB libspdk_bdev.a 00:03:13.170 SO libspdk_bdev.so.15.1 00:03:13.170 SYMLINK libspdk_bdev.so 00:03:13.430 LIB libspdk_blobfs.a 00:03:13.430 SO libspdk_blobfs.so.10.0 00:03:13.430 LIB libspdk_lvol.a 00:03:13.430 SYMLINK libspdk_blobfs.so 00:03:13.430 SO libspdk_lvol.so.10.0 00:03:13.430 SYMLINK libspdk_lvol.so 00:03:13.689 CC lib/scsi/dev.o 00:03:13.689 CC lib/scsi/lun.o 00:03:13.689 CC lib/scsi/scsi.o 00:03:13.689 CC lib/scsi/port.o 00:03:13.689 CC lib/scsi/scsi_bdev.o 00:03:13.689 CC lib/scsi/scsi_pr.o 00:03:13.689 CC lib/scsi/scsi_rpc.o 00:03:13.689 CC lib/scsi/task.o 00:03:13.690 CC lib/nvmf/ctrlr.o 00:03:13.690 CC lib/ftl/ftl_core.o 00:03:13.690 CC lib/ftl/ftl_init.o 00:03:13.690 CC lib/nvmf/ctrlr_discovery.o 00:03:13.690 CC lib/nbd/nbd.o 00:03:13.690 CC lib/ftl/ftl_layout.o 00:03:13.690 CC lib/nvmf/ctrlr_bdev.o 00:03:13.690 CC lib/nbd/nbd_rpc.o 00:03:13.690 CC lib/ftl/ftl_debug.o 00:03:13.690 CC lib/nvmf/subsystem.o 00:03:13.690 CC lib/ftl/ftl_io.o 00:03:13.690 CC lib/nvmf/nvmf.o 00:03:13.690 CC lib/ftl/ftl_sb.o 00:03:13.690 CC lib/nvmf/nvmf_rpc.o 00:03:13.690 CC lib/ublk/ublk.o 00:03:13.690 CC lib/ftl/ftl_l2p.o 00:03:13.690 CC lib/nvmf/transport.o 00:03:13.690 CC lib/ublk/ublk_rpc.o 00:03:13.690 CC lib/ftl/ftl_l2p_flat.o 00:03:13.690 CC lib/nvmf/tcp.o 00:03:13.690 CC lib/nvmf/stubs.o 00:03:13.690 CC lib/ftl/ftl_nv_cache.o 00:03:13.690 CC lib/nvmf/mdns_server.o 00:03:13.690 CC lib/ftl/ftl_band.o 00:03:13.690 CC lib/ftl/ftl_band_ops.o 00:03:13.690 CC lib/nvmf/rdma.o 00:03:13.690 CC lib/nvmf/auth.o 00:03:13.690 CC lib/ftl/ftl_writer.o 00:03:13.690 CC lib/ftl/ftl_rq.o 00:03:13.690 CC lib/ftl/ftl_reloc.o 00:03:13.690 CC lib/ftl/ftl_l2p_cache.o 00:03:13.690 CC lib/ftl/ftl_p2l.o 00:03:13.690 CC lib/ftl/mngt/ftl_mngt.o 00:03:13.690 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:13.690 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:13.690 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:13.690 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:13.690 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:13.690 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:13.690 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:13.690 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:13.690 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:13.690 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:13.690 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:13.690 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:13.690 CC lib/ftl/utils/ftl_conf.o 00:03:13.690 CC lib/ftl/utils/ftl_md.o 00:03:13.690 CC lib/ftl/utils/ftl_mempool.o 00:03:13.690 CC lib/ftl/utils/ftl_bitmap.o 00:03:13.690 CC lib/ftl/utils/ftl_property.o 00:03:13.690 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:13.690 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:13.690 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:13.690 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:13.690 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:13.690 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:13.690 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:13.690 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:13.690 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:13.690 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:13.690 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:13.690 CC lib/ftl/base/ftl_base_dev.o 00:03:13.690 CC lib/ftl/base/ftl_base_bdev.o 00:03:13.690 CC lib/ftl/ftl_trace.o 00:03:14.259 LIB libspdk_nbd.a 00:03:14.259 SO libspdk_nbd.so.7.0 00:03:14.259 LIB libspdk_scsi.a 00:03:14.259 SYMLINK libspdk_nbd.so 00:03:14.259 SO libspdk_scsi.so.9.0 00:03:14.259 LIB libspdk_ublk.a 00:03:14.259 SYMLINK libspdk_scsi.so 00:03:14.259 SO libspdk_ublk.so.3.0 
00:03:14.518 SYMLINK libspdk_ublk.so 00:03:14.518 LIB libspdk_ftl.a 00:03:14.778 CC lib/vhost/vhost.o 00:03:14.778 CC lib/vhost/vhost_rpc.o 00:03:14.778 CC lib/vhost/vhost_scsi.o 00:03:14.778 CC lib/vhost/vhost_blk.o 00:03:14.778 CC lib/iscsi/conn.o 00:03:14.778 CC lib/vhost/rte_vhost_user.o 00:03:14.778 CC lib/iscsi/init_grp.o 00:03:14.778 CC lib/iscsi/iscsi.o 00:03:14.778 CC lib/iscsi/md5.o 00:03:14.778 CC lib/iscsi/portal_grp.o 00:03:14.778 CC lib/iscsi/param.o 00:03:14.778 CC lib/iscsi/tgt_node.o 00:03:14.778 CC lib/iscsi/iscsi_subsystem.o 00:03:14.778 CC lib/iscsi/iscsi_rpc.o 00:03:14.778 CC lib/iscsi/task.o 00:03:14.778 SO libspdk_ftl.so.9.0 00:03:15.037 SYMLINK libspdk_ftl.so 00:03:15.354 LIB libspdk_nvmf.a 00:03:15.614 SO libspdk_nvmf.so.18.1 00:03:15.614 LIB libspdk_vhost.a 00:03:15.614 SO libspdk_vhost.so.8.0 00:03:15.614 SYMLINK libspdk_nvmf.so 00:03:15.614 SYMLINK libspdk_vhost.so 00:03:15.874 LIB libspdk_iscsi.a 00:03:15.874 SO libspdk_iscsi.so.8.0 00:03:15.874 SYMLINK libspdk_iscsi.so 00:03:16.445 CC module/env_dpdk/env_dpdk_rpc.o 00:03:16.704 LIB libspdk_env_dpdk_rpc.a 00:03:16.704 CC module/accel/ioat/accel_ioat.o 00:03:16.704 CC module/accel/ioat/accel_ioat_rpc.o 00:03:16.704 CC module/accel/iaa/accel_iaa.o 00:03:16.704 CC module/accel/error/accel_error.o 00:03:16.704 CC module/accel/dsa/accel_dsa.o 00:03:16.704 CC module/accel/error/accel_error_rpc.o 00:03:16.704 CC module/accel/iaa/accel_iaa_rpc.o 00:03:16.704 CC module/accel/dsa/accel_dsa_rpc.o 00:03:16.704 CC module/keyring/linux/keyring.o 00:03:16.704 CC module/keyring/linux/keyring_rpc.o 00:03:16.704 CC module/keyring/file/keyring.o 00:03:16.704 CC module/keyring/file/keyring_rpc.o 00:03:16.704 CC module/scheduler/gscheduler/gscheduler.o 00:03:16.704 CC module/blob/bdev/blob_bdev.o 00:03:16.704 CC module/sock/posix/posix.o 00:03:16.704 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:16.704 SO libspdk_env_dpdk_rpc.so.6.0 00:03:16.704 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:16.704 SYMLINK libspdk_env_dpdk_rpc.so 00:03:16.964 LIB libspdk_accel_ioat.a 00:03:16.964 LIB libspdk_keyring_linux.a 00:03:16.964 LIB libspdk_keyring_file.a 00:03:16.964 LIB libspdk_scheduler_gscheduler.a 00:03:16.964 SO libspdk_accel_ioat.so.6.0 00:03:16.964 SO libspdk_keyring_file.so.1.0 00:03:16.964 SO libspdk_scheduler_gscheduler.so.4.0 00:03:16.964 LIB libspdk_accel_error.a 00:03:16.964 SO libspdk_keyring_linux.so.1.0 00:03:16.964 LIB libspdk_scheduler_dpdk_governor.a 00:03:16.964 LIB libspdk_accel_iaa.a 00:03:16.964 LIB libspdk_scheduler_dynamic.a 00:03:16.964 SO libspdk_accel_error.so.2.0 00:03:16.965 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:16.965 SYMLINK libspdk_accel_ioat.so 00:03:16.965 SO libspdk_accel_iaa.so.3.0 00:03:16.965 SYMLINK libspdk_scheduler_gscheduler.so 00:03:16.965 SYMLINK libspdk_keyring_file.so 00:03:16.965 LIB libspdk_accel_dsa.a 00:03:16.965 SO libspdk_scheduler_dynamic.so.4.0 00:03:16.965 SYMLINK libspdk_keyring_linux.so 00:03:16.965 LIB libspdk_blob_bdev.a 00:03:16.965 SO libspdk_accel_dsa.so.5.0 00:03:16.965 SYMLINK libspdk_scheduler_dynamic.so 00:03:16.965 SYMLINK libspdk_accel_error.so 00:03:16.965 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:16.965 SO libspdk_blob_bdev.so.11.0 00:03:16.965 SYMLINK libspdk_accel_iaa.so 00:03:16.965 SYMLINK libspdk_accel_dsa.so 00:03:17.225 SYMLINK libspdk_blob_bdev.so 00:03:17.487 LIB libspdk_sock_posix.a 00:03:17.487 SO libspdk_sock_posix.so.6.0 00:03:17.487 SYMLINK libspdk_sock_posix.so 00:03:17.746 CC module/bdev/null/bdev_null.o 00:03:17.746 CC 
module/bdev/null/bdev_null_rpc.o 00:03:17.746 CC module/bdev/gpt/gpt.o 00:03:17.746 CC module/bdev/passthru/vbdev_passthru.o 00:03:17.746 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:17.746 CC module/bdev/malloc/bdev_malloc.o 00:03:17.746 CC module/bdev/gpt/vbdev_gpt.o 00:03:17.746 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:17.746 CC module/bdev/delay/vbdev_delay.o 00:03:17.746 CC module/bdev/lvol/vbdev_lvol.o 00:03:17.746 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:17.746 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:17.746 CC module/bdev/iscsi/bdev_iscsi.o 00:03:17.746 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:17.746 CC module/bdev/error/vbdev_error.o 00:03:17.746 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:17.746 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:17.746 CC module/blobfs/bdev/blobfs_bdev.o 00:03:17.746 CC module/bdev/error/vbdev_error_rpc.o 00:03:17.746 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:17.746 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:17.746 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:17.746 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:17.746 CC module/bdev/nvme/bdev_nvme.o 00:03:17.746 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:17.746 CC module/bdev/ftl/bdev_ftl.o 00:03:17.746 CC module/bdev/nvme/nvme_rpc.o 00:03:17.746 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:17.746 CC module/bdev/nvme/bdev_mdns_client.o 00:03:17.746 CC module/bdev/split/vbdev_split.o 00:03:17.746 CC module/bdev/raid/bdev_raid_rpc.o 00:03:17.746 CC module/bdev/raid/bdev_raid.o 00:03:17.746 CC module/bdev/nvme/vbdev_opal.o 00:03:17.746 CC module/bdev/split/vbdev_split_rpc.o 00:03:17.746 CC module/bdev/aio/bdev_aio.o 00:03:17.746 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:17.746 CC module/bdev/aio/bdev_aio_rpc.o 00:03:17.746 CC module/bdev/raid/bdev_raid_sb.o 00:03:17.746 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:17.746 CC module/bdev/raid/raid0.o 00:03:17.746 CC module/bdev/raid/raid1.o 00:03:17.746 CC module/bdev/raid/concat.o 00:03:18.007 LIB libspdk_blobfs_bdev.a 00:03:18.007 LIB libspdk_bdev_null.a 00:03:18.007 LIB libspdk_bdev_split.a 00:03:18.007 SO libspdk_blobfs_bdev.so.6.0 00:03:18.007 LIB libspdk_bdev_gpt.a 00:03:18.007 SO libspdk_bdev_null.so.6.0 00:03:18.007 LIB libspdk_bdev_error.a 00:03:18.007 LIB libspdk_bdev_passthru.a 00:03:18.007 SO libspdk_bdev_gpt.so.6.0 00:03:18.007 SO libspdk_bdev_split.so.6.0 00:03:18.007 LIB libspdk_bdev_ftl.a 00:03:18.007 SO libspdk_bdev_error.so.6.0 00:03:18.007 SYMLINK libspdk_blobfs_bdev.so 00:03:18.007 SO libspdk_bdev_passthru.so.6.0 00:03:18.007 LIB libspdk_bdev_malloc.a 00:03:18.007 SO libspdk_bdev_ftl.so.6.0 00:03:18.007 LIB libspdk_bdev_aio.a 00:03:18.007 SYMLINK libspdk_bdev_null.so 00:03:18.007 LIB libspdk_bdev_zone_block.a 00:03:18.007 LIB libspdk_bdev_delay.a 00:03:18.007 SYMLINK libspdk_bdev_gpt.so 00:03:18.007 SYMLINK libspdk_bdev_split.so 00:03:18.007 SO libspdk_bdev_malloc.so.6.0 00:03:18.007 LIB libspdk_bdev_iscsi.a 00:03:18.007 SYMLINK libspdk_bdev_error.so 00:03:18.007 SO libspdk_bdev_aio.so.6.0 00:03:18.007 SO libspdk_bdev_zone_block.so.6.0 00:03:18.007 SYMLINK libspdk_bdev_passthru.so 00:03:18.007 SO libspdk_bdev_delay.so.6.0 00:03:18.007 SYMLINK libspdk_bdev_ftl.so 00:03:18.007 SO libspdk_bdev_iscsi.so.6.0 00:03:18.007 SYMLINK libspdk_bdev_malloc.so 00:03:18.007 SYMLINK libspdk_bdev_zone_block.so 00:03:18.007 SYMLINK libspdk_bdev_aio.so 00:03:18.007 LIB libspdk_bdev_virtio.a 00:03:18.007 LIB libspdk_bdev_lvol.a 00:03:18.007 SYMLINK libspdk_bdev_iscsi.so 00:03:18.007 SYMLINK libspdk_bdev_delay.so 
00:03:18.267 SO libspdk_bdev_lvol.so.6.0 00:03:18.267 SO libspdk_bdev_virtio.so.6.0 00:03:18.267 SYMLINK libspdk_bdev_lvol.so 00:03:18.267 SYMLINK libspdk_bdev_virtio.so 00:03:18.528 LIB libspdk_bdev_raid.a 00:03:18.528 SO libspdk_bdev_raid.so.6.0 00:03:18.788 SYMLINK libspdk_bdev_raid.so 00:03:19.729 LIB libspdk_bdev_nvme.a 00:03:19.729 SO libspdk_bdev_nvme.so.7.0 00:03:19.729 SYMLINK libspdk_bdev_nvme.so 00:03:20.302 CC module/event/subsystems/sock/sock.o 00:03:20.302 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:20.302 CC module/event/subsystems/scheduler/scheduler.o 00:03:20.302 CC module/event/subsystems/iobuf/iobuf.o 00:03:20.302 CC module/event/subsystems/vmd/vmd.o 00:03:20.302 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:20.563 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:20.563 CC module/event/subsystems/keyring/keyring.o 00:03:20.563 LIB libspdk_event_scheduler.a 00:03:20.563 LIB libspdk_event_sock.a 00:03:20.563 LIB libspdk_event_vhost_blk.a 00:03:20.563 LIB libspdk_event_iobuf.a 00:03:20.563 LIB libspdk_event_keyring.a 00:03:20.563 LIB libspdk_event_vmd.a 00:03:20.563 SO libspdk_event_sock.so.5.0 00:03:20.563 SO libspdk_event_scheduler.so.4.0 00:03:20.563 SO libspdk_event_vhost_blk.so.3.0 00:03:20.563 SO libspdk_event_iobuf.so.3.0 00:03:20.563 SO libspdk_event_keyring.so.1.0 00:03:20.563 SO libspdk_event_vmd.so.6.0 00:03:20.563 SYMLINK libspdk_event_sock.so 00:03:20.824 SYMLINK libspdk_event_vhost_blk.so 00:03:20.824 SYMLINK libspdk_event_scheduler.so 00:03:20.824 SYMLINK libspdk_event_keyring.so 00:03:20.824 SYMLINK libspdk_event_iobuf.so 00:03:20.824 SYMLINK libspdk_event_vmd.so 00:03:21.086 CC module/event/subsystems/accel/accel.o 00:03:21.086 LIB libspdk_event_accel.a 00:03:21.347 SO libspdk_event_accel.so.6.0 00:03:21.347 SYMLINK libspdk_event_accel.so 00:03:21.608 CC module/event/subsystems/bdev/bdev.o 00:03:21.869 LIB libspdk_event_bdev.a 00:03:21.869 SO libspdk_event_bdev.so.6.0 00:03:21.869 SYMLINK libspdk_event_bdev.so 00:03:22.130 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:22.130 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:22.130 CC module/event/subsystems/scsi/scsi.o 00:03:22.130 CC module/event/subsystems/nbd/nbd.o 00:03:22.392 CC module/event/subsystems/ublk/ublk.o 00:03:22.392 LIB libspdk_event_nbd.a 00:03:22.392 LIB libspdk_event_scsi.a 00:03:22.392 LIB libspdk_event_ublk.a 00:03:22.392 SO libspdk_event_nbd.so.6.0 00:03:22.392 SO libspdk_event_scsi.so.6.0 00:03:22.392 SO libspdk_event_ublk.so.3.0 00:03:22.392 LIB libspdk_event_nvmf.a 00:03:22.392 SYMLINK libspdk_event_nbd.so 00:03:22.653 SO libspdk_event_nvmf.so.6.0 00:03:22.653 SYMLINK libspdk_event_scsi.so 00:03:22.653 SYMLINK libspdk_event_ublk.so 00:03:22.653 SYMLINK libspdk_event_nvmf.so 00:03:22.915 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:22.915 CC module/event/subsystems/iscsi/iscsi.o 00:03:22.915 LIB libspdk_event_vhost_scsi.a 00:03:23.176 SO libspdk_event_vhost_scsi.so.3.0 00:03:23.176 LIB libspdk_event_iscsi.a 00:03:23.176 SO libspdk_event_iscsi.so.6.0 00:03:23.176 SYMLINK libspdk_event_vhost_scsi.so 00:03:23.176 SYMLINK libspdk_event_iscsi.so 00:03:23.437 SO libspdk.so.6.0 00:03:23.437 SYMLINK libspdk.so 00:03:23.698 CXX app/trace/trace.o 00:03:23.698 CC app/spdk_nvme_discover/discovery_aer.o 00:03:23.698 CC app/trace_record/trace_record.o 00:03:23.698 CC app/spdk_top/spdk_top.o 00:03:23.698 CC app/spdk_lspci/spdk_lspci.o 00:03:23.698 TEST_HEADER include/spdk/accel.h 00:03:23.698 CC app/spdk_nvme_identify/identify.o 00:03:23.698 TEST_HEADER 
include/spdk/accel_module.h 00:03:23.698 TEST_HEADER include/spdk/assert.h 00:03:23.698 TEST_HEADER include/spdk/base64.h 00:03:23.698 CC test/rpc_client/rpc_client_test.o 00:03:23.698 TEST_HEADER include/spdk/barrier.h 00:03:23.698 TEST_HEADER include/spdk/bdev.h 00:03:23.698 CC app/spdk_nvme_perf/perf.o 00:03:23.698 TEST_HEADER include/spdk/bdev_module.h 00:03:23.698 TEST_HEADER include/spdk/bdev_zone.h 00:03:23.698 TEST_HEADER include/spdk/bit_array.h 00:03:23.698 TEST_HEADER include/spdk/bit_pool.h 00:03:23.698 TEST_HEADER include/spdk/blob_bdev.h 00:03:23.698 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:23.698 TEST_HEADER include/spdk/blob.h 00:03:23.698 TEST_HEADER include/spdk/blobfs.h 00:03:23.698 TEST_HEADER include/spdk/conf.h 00:03:23.698 TEST_HEADER include/spdk/config.h 00:03:23.698 TEST_HEADER include/spdk/cpuset.h 00:03:23.698 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:23.698 TEST_HEADER include/spdk/crc32.h 00:03:23.698 TEST_HEADER include/spdk/crc16.h 00:03:23.698 TEST_HEADER include/spdk/crc64.h 00:03:23.698 CC app/spdk_dd/spdk_dd.o 00:03:23.698 TEST_HEADER include/spdk/dif.h 00:03:23.698 TEST_HEADER include/spdk/dma.h 00:03:23.698 TEST_HEADER include/spdk/endian.h 00:03:23.698 TEST_HEADER include/spdk/env_dpdk.h 00:03:23.698 TEST_HEADER include/spdk/env.h 00:03:23.698 TEST_HEADER include/spdk/event.h 00:03:23.698 TEST_HEADER include/spdk/fd_group.h 00:03:23.698 TEST_HEADER include/spdk/fd.h 00:03:23.698 TEST_HEADER include/spdk/file.h 00:03:23.698 CC app/iscsi_tgt/iscsi_tgt.o 00:03:23.698 TEST_HEADER include/spdk/ftl.h 00:03:23.698 TEST_HEADER include/spdk/gpt_spec.h 00:03:23.698 TEST_HEADER include/spdk/hexlify.h 00:03:23.698 TEST_HEADER include/spdk/idxd.h 00:03:23.698 TEST_HEADER include/spdk/histogram_data.h 00:03:23.698 TEST_HEADER include/spdk/idxd_spec.h 00:03:23.698 CC app/nvmf_tgt/nvmf_main.o 00:03:23.698 TEST_HEADER include/spdk/init.h 00:03:23.698 TEST_HEADER include/spdk/ioat.h 00:03:23.698 TEST_HEADER include/spdk/iscsi_spec.h 00:03:23.698 TEST_HEADER include/spdk/ioat_spec.h 00:03:23.698 TEST_HEADER include/spdk/json.h 00:03:23.698 TEST_HEADER include/spdk/jsonrpc.h 00:03:23.698 TEST_HEADER include/spdk/keyring.h 00:03:23.698 TEST_HEADER include/spdk/keyring_module.h 00:03:23.698 TEST_HEADER include/spdk/likely.h 00:03:23.698 TEST_HEADER include/spdk/log.h 00:03:23.698 CC app/spdk_tgt/spdk_tgt.o 00:03:23.698 TEST_HEADER include/spdk/memory.h 00:03:23.698 TEST_HEADER include/spdk/lvol.h 00:03:23.698 TEST_HEADER include/spdk/mmio.h 00:03:23.698 TEST_HEADER include/spdk/nbd.h 00:03:23.698 TEST_HEADER include/spdk/notify.h 00:03:23.698 TEST_HEADER include/spdk/nvme.h 00:03:23.698 TEST_HEADER include/spdk/nvme_intel.h 00:03:23.698 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:23.698 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:23.698 TEST_HEADER include/spdk/nvme_spec.h 00:03:23.698 TEST_HEADER include/spdk/nvme_zns.h 00:03:23.698 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:23.698 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:23.698 TEST_HEADER include/spdk/nvmf.h 00:03:23.698 TEST_HEADER include/spdk/nvmf_spec.h 00:03:23.698 TEST_HEADER include/spdk/opal.h 00:03:23.698 TEST_HEADER include/spdk/nvmf_transport.h 00:03:23.698 TEST_HEADER include/spdk/opal_spec.h 00:03:23.698 TEST_HEADER include/spdk/pci_ids.h 00:03:23.958 TEST_HEADER include/spdk/pipe.h 00:03:23.958 TEST_HEADER include/spdk/queue.h 00:03:23.958 TEST_HEADER include/spdk/reduce.h 00:03:23.958 TEST_HEADER include/spdk/rpc.h 00:03:23.958 TEST_HEADER include/spdk/scheduler.h 00:03:23.959 
TEST_HEADER include/spdk/scsi.h 00:03:23.959 TEST_HEADER include/spdk/scsi_spec.h 00:03:23.959 TEST_HEADER include/spdk/sock.h 00:03:23.959 TEST_HEADER include/spdk/stdinc.h 00:03:23.959 TEST_HEADER include/spdk/string.h 00:03:23.959 TEST_HEADER include/spdk/thread.h 00:03:23.959 TEST_HEADER include/spdk/trace.h 00:03:23.959 TEST_HEADER include/spdk/trace_parser.h 00:03:23.959 TEST_HEADER include/spdk/tree.h 00:03:23.959 TEST_HEADER include/spdk/ublk.h 00:03:23.959 TEST_HEADER include/spdk/uuid.h 00:03:23.959 TEST_HEADER include/spdk/util.h 00:03:23.959 TEST_HEADER include/spdk/version.h 00:03:23.959 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:23.959 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:23.959 TEST_HEADER include/spdk/vhost.h 00:03:23.959 TEST_HEADER include/spdk/vmd.h 00:03:23.959 TEST_HEADER include/spdk/xor.h 00:03:23.959 TEST_HEADER include/spdk/zipf.h 00:03:23.959 CXX test/cpp_headers/accel.o 00:03:23.959 CXX test/cpp_headers/accel_module.o 00:03:23.959 CXX test/cpp_headers/assert.o 00:03:23.959 CXX test/cpp_headers/barrier.o 00:03:23.959 CXX test/cpp_headers/base64.o 00:03:23.959 CXX test/cpp_headers/bdev.o 00:03:23.959 CXX test/cpp_headers/bdev_module.o 00:03:23.959 CXX test/cpp_headers/bdev_zone.o 00:03:23.959 CXX test/cpp_headers/bit_pool.o 00:03:23.959 CXX test/cpp_headers/bit_array.o 00:03:23.959 CXX test/cpp_headers/blob_bdev.o 00:03:23.959 CXX test/cpp_headers/blobfs_bdev.o 00:03:23.959 CXX test/cpp_headers/blobfs.o 00:03:23.959 CXX test/cpp_headers/blob.o 00:03:23.959 CXX test/cpp_headers/conf.o 00:03:23.959 CXX test/cpp_headers/config.o 00:03:23.959 CXX test/cpp_headers/cpuset.o 00:03:23.959 CXX test/cpp_headers/crc16.o 00:03:23.959 CXX test/cpp_headers/crc64.o 00:03:23.959 CXX test/cpp_headers/crc32.o 00:03:23.959 CXX test/cpp_headers/dif.o 00:03:23.959 CXX test/cpp_headers/endian.o 00:03:23.959 CXX test/cpp_headers/dma.o 00:03:23.959 CXX test/cpp_headers/event.o 00:03:23.959 CXX test/cpp_headers/env_dpdk.o 00:03:23.959 CXX test/cpp_headers/env.o 00:03:23.959 CXX test/cpp_headers/fd.o 00:03:23.959 CXX test/cpp_headers/fd_group.o 00:03:23.959 CXX test/cpp_headers/ftl.o 00:03:23.959 CXX test/cpp_headers/file.o 00:03:23.959 CXX test/cpp_headers/gpt_spec.o 00:03:23.959 CXX test/cpp_headers/hexlify.o 00:03:23.959 CXX test/cpp_headers/histogram_data.o 00:03:23.959 CXX test/cpp_headers/idxd.o 00:03:23.959 CXX test/cpp_headers/ioat.o 00:03:23.959 CXX test/cpp_headers/idxd_spec.o 00:03:23.959 CXX test/cpp_headers/init.o 00:03:23.959 CXX test/cpp_headers/ioat_spec.o 00:03:23.959 CXX test/cpp_headers/iscsi_spec.o 00:03:23.959 CXX test/cpp_headers/json.o 00:03:23.959 CXX test/cpp_headers/jsonrpc.o 00:03:23.959 CXX test/cpp_headers/keyring.o 00:03:23.959 CXX test/cpp_headers/keyring_module.o 00:03:23.959 CC examples/util/zipf/zipf.o 00:03:23.959 CXX test/cpp_headers/memory.o 00:03:23.959 CXX test/cpp_headers/likely.o 00:03:23.959 CXX test/cpp_headers/log.o 00:03:23.959 CXX test/cpp_headers/lvol.o 00:03:23.959 CXX test/cpp_headers/mmio.o 00:03:23.959 CXX test/cpp_headers/nvme.o 00:03:23.959 CXX test/cpp_headers/nbd.o 00:03:23.959 CXX test/cpp_headers/nvme_intel.o 00:03:23.959 CC test/thread/poller_perf/poller_perf.o 00:03:23.959 CXX test/cpp_headers/notify.o 00:03:23.959 CXX test/cpp_headers/nvme_ocssd.o 00:03:23.959 CXX test/cpp_headers/nvme_zns.o 00:03:23.959 CXX test/cpp_headers/nvme_spec.o 00:03:23.959 CC test/app/stub/stub.o 00:03:23.959 CC test/env/memory/memory_ut.o 00:03:23.959 CXX test/cpp_headers/nvmf_cmd.o 00:03:23.959 CXX test/cpp_headers/nvme_ocssd_spec.o 
00:03:23.959 CXX test/cpp_headers/nvmf.o 00:03:23.959 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:23.959 CC app/fio/nvme/fio_plugin.o 00:03:23.959 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:23.959 CXX test/cpp_headers/nvmf_transport.o 00:03:23.959 CXX test/cpp_headers/nvmf_spec.o 00:03:23.959 CXX test/cpp_headers/opal.o 00:03:23.959 LINK spdk_lspci 00:03:23.959 CXX test/cpp_headers/pci_ids.o 00:03:23.959 CC examples/ioat/verify/verify.o 00:03:23.959 CXX test/cpp_headers/opal_spec.o 00:03:23.959 CXX test/cpp_headers/pipe.o 00:03:23.959 CC examples/ioat/perf/perf.o 00:03:23.959 CC test/env/vtophys/vtophys.o 00:03:23.959 CXX test/cpp_headers/queue.o 00:03:23.959 CXX test/cpp_headers/reduce.o 00:03:23.959 CXX test/cpp_headers/scsi.o 00:03:23.959 CXX test/cpp_headers/rpc.o 00:03:23.959 CXX test/cpp_headers/scheduler.o 00:03:23.959 CC test/app/jsoncat/jsoncat.o 00:03:23.959 CC test/app/histogram_perf/histogram_perf.o 00:03:23.959 CXX test/cpp_headers/scsi_spec.o 00:03:23.959 CXX test/cpp_headers/sock.o 00:03:23.959 CXX test/cpp_headers/stdinc.o 00:03:23.959 CC test/env/pci/pci_ut.o 00:03:23.959 CXX test/cpp_headers/string.o 00:03:23.959 CXX test/cpp_headers/thread.o 00:03:23.959 CXX test/cpp_headers/trace.o 00:03:23.959 CXX test/cpp_headers/tree.o 00:03:23.959 CXX test/cpp_headers/trace_parser.o 00:03:23.959 CXX test/cpp_headers/ublk.o 00:03:23.959 CXX test/cpp_headers/util.o 00:03:23.959 CXX test/cpp_headers/version.o 00:03:23.959 CXX test/cpp_headers/uuid.o 00:03:23.959 CXX test/cpp_headers/vfio_user_pci.o 00:03:23.959 CXX test/cpp_headers/vhost.o 00:03:23.959 CXX test/cpp_headers/vfio_user_spec.o 00:03:23.959 CXX test/cpp_headers/vmd.o 00:03:23.959 CXX test/cpp_headers/zipf.o 00:03:23.959 CXX test/cpp_headers/xor.o 00:03:23.959 CC app/fio/bdev/fio_plugin.o 00:03:23.959 LINK rpc_client_test 00:03:23.959 CC test/dma/test_dma/test_dma.o 00:03:23.959 LINK spdk_nvme_discover 00:03:23.959 CC test/app/bdev_svc/bdev_svc.o 00:03:24.218 LINK spdk_trace_record 00:03:24.218 LINK interrupt_tgt 00:03:24.218 LINK iscsi_tgt 00:03:24.218 LINK nvmf_tgt 00:03:24.218 LINK spdk_tgt 00:03:24.218 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:24.218 LINK zipf 00:03:24.218 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:24.218 CC test/env/mem_callbacks/mem_callbacks.o 00:03:24.218 LINK spdk_dd 00:03:24.477 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:24.477 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:24.477 LINK vtophys 00:03:24.477 LINK spdk_trace 00:03:24.477 LINK env_dpdk_post_init 00:03:24.477 LINK verify 00:03:24.477 LINK jsoncat 00:03:24.477 LINK poller_perf 00:03:24.477 LINK histogram_perf 00:03:24.477 LINK stub 00:03:24.477 LINK bdev_svc 00:03:24.738 LINK ioat_perf 00:03:24.738 LINK test_dma 00:03:24.738 CC app/vhost/vhost.o 00:03:24.738 CC examples/idxd/perf/perf.o 00:03:24.738 CC examples/sock/hello_world/hello_sock.o 00:03:24.738 LINK spdk_nvme_perf 00:03:24.738 CC examples/vmd/led/led.o 00:03:24.738 LINK pci_ut 00:03:24.738 CC examples/vmd/lsvmd/lsvmd.o 00:03:24.738 CC examples/thread/thread/thread_ex.o 00:03:24.998 LINK vhost_fuzz 00:03:24.998 LINK spdk_bdev 00:03:24.998 LINK nvme_fuzz 00:03:24.998 LINK spdk_nvme 00:03:24.998 LINK spdk_nvme_identify 00:03:24.998 LINK vhost 00:03:24.998 CC test/event/event_perf/event_perf.o 00:03:24.998 CC test/event/reactor_perf/reactor_perf.o 00:03:24.998 CC test/event/reactor/reactor.o 00:03:24.998 LINK lsvmd 00:03:24.998 LINK led 00:03:24.998 CC test/event/app_repeat/app_repeat.o 00:03:24.998 CC test/event/scheduler/scheduler.o 00:03:24.998 LINK 
spdk_top 00:03:24.998 LINK hello_sock 00:03:24.998 LINK mem_callbacks 00:03:24.998 LINK idxd_perf 00:03:25.259 LINK thread 00:03:25.259 CC test/nvme/reset/reset.o 00:03:25.259 CC test/nvme/sgl/sgl.o 00:03:25.259 CC test/nvme/e2edp/nvme_dp.o 00:03:25.259 CC test/nvme/cuse/cuse.o 00:03:25.259 CC test/nvme/startup/startup.o 00:03:25.259 CC test/nvme/boot_partition/boot_partition.o 00:03:25.259 CC test/nvme/overhead/overhead.o 00:03:25.259 CC test/nvme/err_injection/err_injection.o 00:03:25.259 LINK reactor_perf 00:03:25.259 CC test/nvme/aer/aer.o 00:03:25.259 CC test/nvme/compliance/nvme_compliance.o 00:03:25.259 LINK reactor 00:03:25.259 CC test/nvme/simple_copy/simple_copy.o 00:03:25.259 LINK event_perf 00:03:25.259 CC test/nvme/connect_stress/connect_stress.o 00:03:25.259 CC test/nvme/fused_ordering/fused_ordering.o 00:03:25.259 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:25.259 CC test/nvme/fdp/fdp.o 00:03:25.259 CC test/nvme/reserve/reserve.o 00:03:25.259 CC test/blobfs/mkfs/mkfs.o 00:03:25.259 LINK app_repeat 00:03:25.259 CC test/accel/dif/dif.o 00:03:25.259 LINK scheduler 00:03:25.259 CC test/lvol/esnap/esnap.o 00:03:25.259 LINK err_injection 00:03:25.259 LINK boot_partition 00:03:25.259 LINK startup 00:03:25.519 LINK connect_stress 00:03:25.519 LINK reset 00:03:25.519 LINK doorbell_aers 00:03:25.519 LINK reserve 00:03:25.519 LINK mkfs 00:03:25.519 LINK fused_ordering 00:03:25.519 LINK simple_copy 00:03:25.519 LINK sgl 00:03:25.519 LINK nvme_dp 00:03:25.519 LINK overhead 00:03:25.519 LINK aer 00:03:25.519 LINK nvme_compliance 00:03:25.519 LINK fdp 00:03:25.519 LINK memory_ut 00:03:25.519 CC examples/nvme/arbitration/arbitration.o 00:03:25.519 CC examples/nvme/hello_world/hello_world.o 00:03:25.519 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:25.519 CC examples/nvme/hotplug/hotplug.o 00:03:25.519 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:25.519 CC examples/nvme/abort/abort.o 00:03:25.519 CC examples/nvme/reconnect/reconnect.o 00:03:25.519 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:25.519 CC examples/accel/perf/accel_perf.o 00:03:25.519 LINK dif 00:03:25.781 CC examples/blob/cli/blobcli.o 00:03:25.781 CC examples/blob/hello_world/hello_blob.o 00:03:25.781 LINK pmr_persistence 00:03:25.781 LINK hello_world 00:03:25.781 LINK cmb_copy 00:03:25.781 LINK hotplug 00:03:25.781 LINK arbitration 00:03:25.781 LINK iscsi_fuzz 00:03:26.043 LINK hello_blob 00:03:26.043 LINK reconnect 00:03:26.043 LINK abort 00:03:26.043 LINK nvme_manage 00:03:26.043 LINK accel_perf 00:03:26.043 LINK blobcli 00:03:26.303 CC test/bdev/bdevio/bdevio.o 00:03:26.303 LINK cuse 00:03:26.562 LINK bdevio 00:03:26.562 CC examples/bdev/hello_world/hello_bdev.o 00:03:26.562 CC examples/bdev/bdevperf/bdevperf.o 00:03:26.823 LINK hello_bdev 00:03:27.395 LINK bdevperf 00:03:27.968 CC examples/nvmf/nvmf/nvmf.o 00:03:28.229 LINK nvmf 00:03:29.617 LINK esnap 00:03:29.878 00:03:29.878 real 0m50.460s 00:03:29.878 user 6m21.077s 00:03:29.878 sys 3m58.914s 00:03:29.878 10:11:06 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:29.878 10:11:06 make -- common/autotest_common.sh@10 -- $ set +x 00:03:29.878 ************************************ 00:03:29.878 END TEST make 00:03:29.878 ************************************ 00:03:29.878 10:11:06 -- common/autotest_common.sh@1142 -- $ return 0 00:03:29.878 10:11:06 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:29.878 10:11:06 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:29.878 10:11:06 -- pm/common@40 -- $ local monitor pid pids 
signal=TERM 00:03:29.878 10:11:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:29.878 10:11:06 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:29.878 10:11:06 -- pm/common@44 -- $ pid=2596988 00:03:29.878 10:11:06 -- pm/common@50 -- $ kill -TERM 2596988 00:03:29.878 10:11:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:29.878 10:11:06 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:29.878 10:11:06 -- pm/common@44 -- $ pid=2596989 00:03:29.878 10:11:06 -- pm/common@50 -- $ kill -TERM 2596989 00:03:29.878 10:11:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:29.878 10:11:06 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:29.878 10:11:06 -- pm/common@44 -- $ pid=2596991 00:03:29.878 10:11:06 -- pm/common@50 -- $ kill -TERM 2596991 00:03:29.878 10:11:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:29.878 10:11:06 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:29.878 10:11:06 -- pm/common@44 -- $ pid=2597015 00:03:29.878 10:11:06 -- pm/common@50 -- $ sudo -E kill -TERM 2597015 00:03:29.878 10:11:07 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:03:29.878 10:11:07 -- nvmf/common.sh@7 -- # uname -s 00:03:29.878 10:11:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:29.878 10:11:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:29.878 10:11:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:29.878 10:11:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:29.878 10:11:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:29.878 10:11:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:29.878 10:11:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:29.878 10:11:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:29.878 10:11:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:29.878 10:11:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:29.878 10:11:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:03:29.878 10:11:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:03:29.878 10:11:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:29.878 10:11:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:29.878 10:11:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:29.878 10:11:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:29.878 10:11:07 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:03:30.142 10:11:07 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:30.142 10:11:07 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:30.142 10:11:07 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:30.142 10:11:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:30.142 10:11:07 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:30.142 10:11:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:30.142 10:11:07 -- paths/export.sh@5 -- # export PATH 00:03:30.142 10:11:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:30.142 10:11:07 -- nvmf/common.sh@47 -- # : 0 00:03:30.142 10:11:07 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:30.142 10:11:07 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:30.142 10:11:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:30.142 10:11:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:30.142 10:11:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:30.142 10:11:07 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:30.142 10:11:07 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:30.142 10:11:07 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:30.142 10:11:07 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:30.142 10:11:07 -- spdk/autotest.sh@32 -- # uname -s 00:03:30.142 10:11:07 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:30.142 10:11:07 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:30.142 10:11:07 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:03:30.142 10:11:07 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:30.142 10:11:07 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:03:30.142 10:11:07 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:30.142 10:11:07 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:30.142 10:11:07 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:30.142 10:11:07 -- spdk/autotest.sh@48 -- # udevadm_pid=2659602 00:03:30.142 10:11:07 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:30.142 10:11:07 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:30.142 10:11:07 -- pm/common@17 -- # local monitor 00:03:30.142 10:11:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:30.142 10:11:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:30.142 10:11:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:30.142 10:11:07 -- pm/common@21 -- # date +%s 00:03:30.142 10:11:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:30.142 10:11:07 -- pm/common@25 -- # sleep 1 00:03:30.142 10:11:07 -- pm/common@21 -- # date +%s 00:03:30.142 10:11:07 -- pm/common@21 -- # date +%s 00:03:30.142 10:11:07 -- pm/common@21 -- # date +%s 00:03:30.142 10:11:07 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721031067 00:03:30.142 10:11:07 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721031067 00:03:30.142 10:11:07 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721031067 00:03:30.142 10:11:07 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721031067 00:03:30.142 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721031067_collect-vmstat.pm.log 00:03:30.142 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721031067_collect-cpu-load.pm.log 00:03:30.142 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721031067_collect-cpu-temp.pm.log 00:03:30.142 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721031067_collect-bmc-pm.bmc.pm.log 00:03:31.086 10:11:08 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:31.086 10:11:08 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:31.086 10:11:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:31.086 10:11:08 -- common/autotest_common.sh@10 -- # set +x 00:03:31.086 10:11:08 -- spdk/autotest.sh@59 -- # create_test_list 00:03:31.086 10:11:08 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:31.086 10:11:08 -- common/autotest_common.sh@10 -- # set +x 00:03:31.086 10:11:08 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh 00:03:31.086 10:11:08 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:03:31.086 10:11:08 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:03:31.086 10:11:08 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:03:31.086 10:11:08 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:03:31.086 10:11:08 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:31.086 10:11:08 -- common/autotest_common.sh@1455 -- # uname 00:03:31.086 10:11:08 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:31.086 10:11:08 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:31.086 10:11:08 -- common/autotest_common.sh@1475 -- # uname 00:03:31.086 10:11:08 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:31.086 10:11:08 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:31.086 10:11:08 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:31.086 10:11:08 -- spdk/autotest.sh@72 -- # hash lcov 00:03:31.086 10:11:08 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:31.086 10:11:08 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:31.086 --rc lcov_branch_coverage=1 00:03:31.086 --rc lcov_function_coverage=1 00:03:31.086 --rc genhtml_branch_coverage=1 00:03:31.086 --rc genhtml_function_coverage=1 00:03:31.086 --rc genhtml_legend=1 00:03:31.086 --rc geninfo_all_blocks=1 00:03:31.086 ' 00:03:31.086 10:11:08 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 
00:03:31.086 --rc lcov_branch_coverage=1 00:03:31.086 --rc lcov_function_coverage=1 00:03:31.086 --rc genhtml_branch_coverage=1 00:03:31.086 --rc genhtml_function_coverage=1 00:03:31.086 --rc genhtml_legend=1 00:03:31.086 --rc geninfo_all_blocks=1 00:03:31.086 ' 00:03:31.086 10:11:08 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:31.086 --rc lcov_branch_coverage=1 00:03:31.086 --rc lcov_function_coverage=1 00:03:31.086 --rc genhtml_branch_coverage=1 00:03:31.087 --rc genhtml_function_coverage=1 00:03:31.087 --rc genhtml_legend=1 00:03:31.087 --rc geninfo_all_blocks=1 00:03:31.087 --no-external' 00:03:31.087 10:11:08 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:31.087 --rc lcov_branch_coverage=1 00:03:31.087 --rc lcov_function_coverage=1 00:03:31.087 --rc genhtml_branch_coverage=1 00:03:31.087 --rc genhtml_function_coverage=1 00:03:31.087 --rc genhtml_legend=1 00:03:31.087 --rc geninfo_all_blocks=1 00:03:31.087 --no-external' 00:03:31.087 10:11:08 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:31.087 lcov: LCOV version 1.14 00:03:31.087 10:11:08 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info 00:03:36.383 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:36.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:36.383 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:36.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:36.383 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:36.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:36.383 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:36.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:36.383 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:36.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:36.383 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:36.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:36.383 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:36.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:36.383 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:36.383 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:36.383 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:36.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:36.383 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:36.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:36.383 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:36.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:36.383 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:36.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:36.383 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:36.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:36.383 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:36.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:36.383 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:36.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:03:36.383 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:36.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:36.383 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:03:36.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/config.gcno 00:03:36.383 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:36.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:03:36.383 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:36.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:03:36.383 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:36.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:03:36.383 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:36.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:03:36.383 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:36.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/endian.gcno 
00:03:36.383 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:36.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:03:36.383 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:36.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:03:36.383 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:03:36.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env.gcno 00:03:36.383 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:36.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:03:36.383 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:36.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:03:36.383 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:36.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:03:36.383 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:03:36.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/event.gcno 00:03:36.383 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:36.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:03:36.383 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:36.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:03:36.383 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:36.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:03:36.383 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:03:36.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/file.gcno 00:03:36.383 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:36.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:03:36.383 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:36.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:03:36.383 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:03:36.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/init.gcno 00:03:36.383 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no 
functions found 00:03:36.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:03:36.383 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:36.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:03:36.383 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:36.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:36.383 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:36.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:03:36.383 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:36.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:03:36.383 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:03:36.383 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/json.gcno 00:03:36.384 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:03:36.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/log.gcno 00:03:36.384 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:36.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:03:36.384 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:36.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:03:36.384 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:36.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:03:36.384 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:36.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:03:36.384 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:36.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:03:36.384 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:36.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:03:36.384 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:36.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:36.384 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:36.384 geninfo: WARNING: GCOV did not produce any data 
for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:36.384 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:36.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:03:36.384 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:36.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:36.384 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:36.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:03:36.384 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:36.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:03:36.384 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:36.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:03:36.384 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:36.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:36.384 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:36.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:03:36.384 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:36.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:03:36.384 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:36.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:36.384 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:36.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:36.384 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:36.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:03:36.384 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:36.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:03:36.384 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:36.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:03:36.384 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:36.384 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:03:36.384 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:36.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:03:36.384 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:36.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:03:36.384 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:36.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:03:36.384 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:36.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:03:36.384 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:36.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:03:36.384 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:36.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:03:36.384 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:36.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:03:36.384 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:36.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:03:36.384 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:36.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:03:36.384 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:36.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:03:36.384 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:36.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:03:36.384 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:36.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:03:36.384 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:36.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:36.384 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:36.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:03:36.384 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:03:36.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/util.gcno 00:03:36.384 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:03:36.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/string.gcno 00:03:36.384 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:36.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:36.384 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:36.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:36.384 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:36.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:03:36.384 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:36.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:36.384 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:36.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:36.384 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:36.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:36.384 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:36.384 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:54.586 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:54.586 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:59.875 10:11:36 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:59.875 10:11:36 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:59.875 10:11:36 -- common/autotest_common.sh@10 -- # set +x 00:03:59.875 10:11:36 -- spdk/autotest.sh@91 -- # rm -f 00:03:59.875 10:11:36 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:03.179 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:04:03.179 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:04:03.179 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:04:03.179 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:04:03.179 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:04:03.440 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:04:03.440 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:04:03.440 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:04:03.440 0000:65:00.0 (144d a80a): Already using the nvme driver 00:04:03.440 0000:00:01.6 (8086 0b00): Already using the 
ioatdma driver 00:04:03.440 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:04:03.440 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:04:03.440 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:04:03.440 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:04:03.440 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:04:03.440 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:04:03.701 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:04:03.701 10:11:40 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:03.701 10:11:40 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:03.701 10:11:40 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:03.701 10:11:40 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:03.701 10:11:40 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:03.701 10:11:40 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:03.701 10:11:40 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:03.701 10:11:40 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:03.701 10:11:40 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:03.701 10:11:40 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:03.701 10:11:40 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:03.701 10:11:40 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:03.701 10:11:40 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:03.701 10:11:40 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:03.701 10:11:40 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:03.701 No valid GPT data, bailing 00:04:03.701 10:11:40 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:03.701 10:11:40 -- scripts/common.sh@391 -- # pt= 00:04:03.701 10:11:40 -- scripts/common.sh@392 -- # return 1 00:04:03.701 10:11:40 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:03.701 1+0 records in 00:04:03.701 1+0 records out 00:04:03.701 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00449436 s, 233 MB/s 00:04:03.701 10:11:40 -- spdk/autotest.sh@118 -- # sync 00:04:03.701 10:11:40 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:03.701 10:11:40 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:03.701 10:11:40 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:11.843 10:11:48 -- spdk/autotest.sh@124 -- # uname -s 00:04:11.843 10:11:48 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:11.843 10:11:48 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh 00:04:11.843 10:11:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:11.843 10:11:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:11.843 10:11:48 -- common/autotest_common.sh@10 -- # set +x 00:04:11.843 ************************************ 00:04:11.843 START TEST setup.sh 00:04:11.843 ************************************ 00:04:11.843 10:11:48 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh 00:04:11.843 * Looking for test storage... 
00:04:11.843 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:04:11.843 10:11:48 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:11.843 10:11:48 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:11.843 10:11:48 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh 00:04:11.843 10:11:48 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:11.843 10:11:48 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:11.843 10:11:48 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:11.843 ************************************ 00:04:11.843 START TEST acl 00:04:11.843 ************************************ 00:04:11.843 10:11:48 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh 00:04:11.843 * Looking for test storage... 00:04:11.843 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:04:11.843 10:11:48 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:11.843 10:11:48 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:11.843 10:11:48 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:11.843 10:11:48 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:11.843 10:11:48 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:11.843 10:11:48 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:11.843 10:11:48 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:11.843 10:11:48 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:11.843 10:11:48 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:11.843 10:11:48 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:11.843 10:11:48 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:11.843 10:11:48 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:11.843 10:11:48 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:11.843 10:11:48 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:11.843 10:11:48 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:11.843 10:11:48 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:16.051 10:11:53 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:16.051 10:11:53 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:16.051 10:11:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:16.051 10:11:53 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:16.051 10:11:53 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:16.051 10:11:53 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:04:20.259 Hugepages 00:04:20.259 node hugesize free / total 00:04:20.259 10:11:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:20.259 10:11:56 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:20.259 10:11:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:20.259 10:11:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:20.259 10:11:56 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:20.259 10:11:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:20.259 
10:11:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:20.259 10:11:56 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:20.259 10:11:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:20.259 00:04:20.259 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:20.259 10:11:56 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:20.259 10:11:56 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:20.259 10:11:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:20.259 10:11:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.0 == *:*:*.* ]] 00:04:20.259 10:11:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:20.259 10:11:57 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:20.259 10:11:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:20.259 10:11:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.1 == *:*:*.* ]] 00:04:20.259 10:11:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:20.259 10:11:57 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:20.259 10:11:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:20.259 10:11:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.2 == *:*:*.* ]] 00:04:20.259 10:11:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:20.259 10:11:57 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:20.259 10:11:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:20.259 10:11:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.3 == *:*:*.* ]] 00:04:20.259 10:11:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:20.259 10:11:57 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:20.259 10:11:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:20.259 10:11:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.4 == *:*:*.* ]] 00:04:20.259 10:11:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:20.259 10:11:57 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:20.259 10:11:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:20.259 10:11:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.5 == *:*:*.* ]] 00:04:20.259 10:11:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:20.259 10:11:57 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:20.259 10:11:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:20.259 10:11:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.6 == *:*:*.* ]] 00:04:20.259 10:11:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:20.259 10:11:57 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:20.259 10:11:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:20.259 10:11:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.7 == *:*:*.* ]] 00:04:20.259 10:11:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:20.259 10:11:57 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:20.259 10:11:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:20.259 10:11:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:65:00.0 == *:*:*.* ]] 00:04:20.259 10:11:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:20.259 10:11:57 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:04:20.259 10:11:57 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:20.259 10:11:57 setup.sh.acl -- setup/acl.sh@22 -- 
# drivers["$dev"]=nvme 00:04:20.260 10:11:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:20.260 10:11:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.0 == *:*:*.* ]] 00:04:20.260 10:11:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:20.260 10:11:57 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:20.260 10:11:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:20.260 10:11:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.1 == *:*:*.* ]] 00:04:20.260 10:11:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:20.260 10:11:57 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:20.260 10:11:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:20.260 10:11:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.2 == *:*:*.* ]] 00:04:20.260 10:11:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:20.260 10:11:57 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:20.260 10:11:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:20.260 10:11:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.3 == *:*:*.* ]] 00:04:20.260 10:11:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:20.260 10:11:57 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:20.260 10:11:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:20.260 10:11:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.4 == *:*:*.* ]] 00:04:20.260 10:11:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:20.260 10:11:57 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:20.260 10:11:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:20.260 10:11:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.5 == *:*:*.* ]] 00:04:20.260 10:11:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:20.260 10:11:57 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:20.260 10:11:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:20.260 10:11:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.6 == *:*:*.* ]] 00:04:20.260 10:11:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:20.260 10:11:57 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:20.260 10:11:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:20.260 10:11:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.7 == *:*:*.* ]] 00:04:20.260 10:11:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:20.260 10:11:57 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:20.260 10:11:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:20.260 10:11:57 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:20.260 10:11:57 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:20.260 10:11:57 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:20.260 10:11:57 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.260 10:11:57 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:20.260 ************************************ 00:04:20.260 START TEST denied 00:04:20.260 ************************************ 00:04:20.260 10:11:57 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:04:20.260 10:11:57 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:65:00.0' 00:04:20.260 10:11:57 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:20.260 10:11:57 
setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:65:00.0' 00:04:20.260 10:11:57 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:20.260 10:11:57 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:24.467 0000:65:00.0 (144d a80a): Skipping denied controller at 0000:65:00.0 00:04:24.467 10:12:01 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:65:00.0 00:04:24.467 10:12:01 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:24.467 10:12:01 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:24.467 10:12:01 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:65:00.0 ]] 00:04:24.467 10:12:01 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:65:00.0/driver 00:04:24.467 10:12:01 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:24.467 10:12:01 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:24.467 10:12:01 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:24.467 10:12:01 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:24.467 10:12:01 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:29.767 00:04:29.767 real 0m8.976s 00:04:29.767 user 0m2.962s 00:04:29.767 sys 0m5.338s 00:04:29.767 10:12:06 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:29.767 10:12:06 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:29.767 ************************************ 00:04:29.767 END TEST denied 00:04:29.767 ************************************ 00:04:29.767 10:12:06 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:29.767 10:12:06 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:29.767 10:12:06 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:29.767 10:12:06 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.767 10:12:06 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:29.767 ************************************ 00:04:29.767 START TEST allowed 00:04:29.767 ************************************ 00:04:29.767 10:12:06 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:04:29.767 10:12:06 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:65:00.0 00:04:29.767 10:12:06 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:29.767 10:12:06 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:65:00.0 .*: nvme -> .*' 00:04:29.767 10:12:06 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:29.767 10:12:06 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:35.053 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:35.053 10:12:12 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:35.053 10:12:12 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:35.053 10:12:12 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:35.053 10:12:12 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:35.053 10:12:12 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:39.284 00:04:39.284 real 0m9.959s 00:04:39.284 user 0m2.983s 00:04:39.284 sys 
0m5.294s 00:04:39.284 10:12:16 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:39.284 10:12:16 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:39.284 ************************************ 00:04:39.284 END TEST allowed 00:04:39.284 ************************************ 00:04:39.284 10:12:16 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:39.284 00:04:39.284 real 0m27.383s 00:04:39.284 user 0m9.014s 00:04:39.284 sys 0m16.237s 00:04:39.284 10:12:16 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:39.284 10:12:16 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:39.284 ************************************ 00:04:39.284 END TEST acl 00:04:39.284 ************************************ 00:04:39.284 10:12:16 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:39.284 10:12:16 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:04:39.284 10:12:16 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:39.284 10:12:16 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:39.284 10:12:16 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:39.284 ************************************ 00:04:39.284 START TEST hugepages 00:04:39.284 ************************************ 00:04:39.284 10:12:16 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:04:39.284 * Looking for test storage... 00:04:39.284 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 106288896 kB' 'MemAvailable: 110014828 kB' 'Buffers: 4132 kB' 'Cached: 10602156 kB' 'SwapCached: 0 kB' 'Active: 7531624 kB' 'Inactive: 3701232 kB' 'Active(anon): 7040192 kB' 'Inactive(anon): 0 kB' 'Active(file): 
491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 629900 kB' 'Mapped: 182816 kB' 'Shmem: 6413624 kB' 'KReclaimable: 574212 kB' 'Slab: 1448064 kB' 'SReclaimable: 574212 kB' 'SUnreclaim: 873852 kB' 'KernelStack: 27824 kB' 'PageTables: 9172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69460876 kB' 'Committed_AS: 8651064 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238252 kB' 'VmallocChunk: 0 kB' 'Percpu: 141696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 4058484 kB' 'DirectMap2M: 57487360 kB' 'DirectMap1G: 74448896 kB' 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@32 -- # 
[[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.284 10:12:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.285 10:12:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.285 10:12:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.285 10:12:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.285 10:12:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.285 10:12:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.285 10:12:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.285 10:12:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.285 10:12:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.285 10:12:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.285 10:12:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.285 10:12:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.285 10:12:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.285 10:12:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.285 10:12:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.285 10:12:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.285 10:12:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.285 10:12:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.285 10:12:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.285 10:12:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.285 10:12:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.285 10:12:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.285 10:12:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.285 10:12:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.285 10:12:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.285 10:12:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.285 10:12:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.285 10:12:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.285 10:12:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.285 10:12:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.285 10:12:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.285 10:12:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:39.285 10:12:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:39.285 10:12:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:39.285 10:12:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:39.285 
10:12:16 setup.sh.hugepages -- setup/common.sh@31-32 -- # compared each remaining /proc/meminfo field (PageTables through HugePages_Surp) against Hugepagesize, skipping every non-match
00:04:39.285 10:12:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:39.285 10:12:16 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:04:39.285 10:12:16 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:04:39.285 10:12:16 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:04:39.285 10:12:16 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:04:39.285 10:12:16 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:04:39.285 10:12:16 setup.sh.hugepages -- setup/hugepages.sh@21-24 -- # unset -v HUGE_EVEN_ALLOC HUGEMEM HUGENODE NRHUGE
00:04:39.285 10:12:16 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:04:39.285 10:12:16 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:04:39.285 10:12:16 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:39.285 10:12:16 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:04:39.285 10:12:16 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:39.286 10:12:16 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:39.286 10:12:16 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:39.286 10:12:16 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
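The xtrace above is the meminfo lookup helper at work: it reads /proc/meminfo one "Field: value" line at a time with IFS=': ', skips every field that is not the one requested, and echoes the value of the first match. A minimal standalone sketch of that pattern follows; the function name get_meminfo_field and its argument handling are illustrative, not the actual helper in setup/common.sh.

#!/usr/bin/env bash
# Minimal sketch of the lookup pattern traced above: scan a meminfo file
# line by line and print the value of a single field. Illustrative only.
shopt -s extglob

get_meminfo_field() {
    local want=$1 file=${2:-/proc/meminfo}
    local line var val _
    while read -r line; do
        # Per-node files prefix every row with "Node <id> "; drop it so the
        # same matching works for /proc/meminfo and nodeN/meminfo alike.
        line=${line#Node +([0-9]) }
        IFS=': ' read -r var val _ <<<"$line"
        if [[ $var == "$want" ]]; then
            echo "$val"   # kB for most fields, a bare page count for HugePages_*
            return 0
        fi
    done <"$file"
    return 1
}

get_meminfo_field Hugepagesize                                            # e.g. 2048
get_meminfo_field HugePages_Free /sys/devices/system/node/node0/meminfo   # per-node variant, if node0 exists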
00:04:39.286 10:12:16 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:04:39.286 10:12:16 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:04:39.286 10:12:16 setup.sh.hugepages -- setup/hugepages.sh@39-41 -- # echoed 0 into nr_hugepages for every hugepage size on node0 and node1
00:04:39.546 10:12:16 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:04:39.546 10:12:16 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
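CLEAR_HUGE=yes records that every per-node hugepage pool has just been zeroed; the default_setup trace that follows then requests its own pool (1024 x 2048 kB pages on node 0). The sketch below shows the underlying sysfs writes under the assumption of the standard /sys/devices/system/node layout; in the real run the equivalent writes are made by setup/hugepages.sh and scripts/setup.sh (driven by NRHUGE/HUGENODE/HUGEMEM/CLEAR_HUGE), not by this snippet, and they require root.

#!/usr/bin/env bash
# Sketch of the clear-then-request flow, assuming the standard sysfs layout.
set -euo pipefail

clear_node_hugepages() {
    local node hp
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 >"$hp/nr_hugepages"   # drop every pool size on every node
        done
    done
}

request_hugepages() {
    # Reserve <count> pages of <size_kb> on one NUMA node.
    local node=$1 count=$2 size_kb=${3:-2048}
    echo "$count" \
        >"/sys/devices/system/node/node$node/hugepages/hugepages-${size_kb}kB/nr_hugepages"
}

clear_node_hugepages
request_hugepages 0 1024   # the 1024-page / node 0 request seen in the trace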
00:04:39.546 10:12:16 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:04:39.546 10:12:16 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:39.546 10:12:16 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:39.546 10:12:16 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:39.546 ************************************
00:04:39.546 START TEST default_setup
00:04:39.546 ************************************
00:04:39.546 10:12:16 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup
00:04:39.546 10:12:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:04:39.546 10:12:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:04:39.546 10:12:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50-52 -- # (( 2 > 1 )), shift, node_ids=('0')
00:04:39.546 10:12:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:39.546 10:12:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:39.546 10:12:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:39.546 10:12:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62-67 -- # user_nodes=('0'), _nr_hugepages=1024, _no_nodes=2, nodes_test=()
00:04:39.546 10:12:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:39.546 10:12:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:39.546 10:12:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:39.546 10:12:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:04:39.546 10:12:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:04:39.546 10:12:16 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:04:39.546 10:12:16 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:04:43.761 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci
00:04:43.761 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci
00:04:43.761 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci
00:04:43.761 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci
00:04:43.761 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci
00:04:43.761 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci
00:04:43.761 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci
00:04:43.761 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci
00:04:43.761 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci
00:04:43.761 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci
00:04:43.761 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci
00:04:43.761 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci
00:04:43.761 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci
00:04:43.761 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci
00:04:43.761 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci
00:04:43.761 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci
00:04:43.761 0000:65:00.0 (144d a80a): nvme -> vfio-pci
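The "ioatdma -> vfio-pci" and "nvme -> vfio-pci" lines are scripts/setup.sh detaching the I/OAT DMA channels and the NVMe controller from their kernel drivers so user-space (SPDK) can drive them. Below is a hedged sketch of the generic sysfs mechanism behind such a rebind (driver_override plus drivers_probe); it is not the actual setup.sh logic, and the BDF used is simply the one from the log.

#!/usr/bin/env bash
# Illustrative rebind via the generic driver_override mechanism. Needs root.
set -euo pipefail

bind_to_vfio() {
    local bdf=$1
    local dev=/sys/bus/pci/devices/$bdf

    modprobe vfio-pci

    # Release the function from whichever driver currently owns it.
    if [[ -e $dev/driver ]]; then
        echo "$bdf" >"$dev/driver/unbind"
    fi

    # Pin the next probe to vfio-pci, then ask the PCI core to reprobe.
    echo vfio-pci >"$dev/driver_override"
    echo "$bdf" >/sys/bus/pci/drivers_probe

    echo "$bdf ($(<"$dev"/vendor) $(<"$dev"/device)): -> vfio-pci"
}

bind_to_vfio 0000:65:00.0   # the Samsung NVMe controller from the log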
00:04:43.761 10:12:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:04:43.761 10:12:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89-94 -- # local node sorted_t sorted_s surp resv anon
00:04:43.761 10:12:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:43.761 10:12:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:43.761 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@17-29 -- # get=AnonHugePages, node='', mem_f=/proc/meminfo, mapfile -t mem, mem=("${mem[@]#Node +([0-9]) }")
00:04:43.761 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108455456 kB' 'MemAvailable: 112181268 kB' 'Buffers: 4132 kB' 'Cached: 10602288 kB' 'SwapCached: 0 kB' 'Active: 7551512 kB' 'Inactive: 3701232 kB' 'Active(anon): 7060080 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 649684 kB' 'Mapped: 183108 kB' 'Shmem: 6413756 kB' 'KReclaimable: 574092 kB' 'Slab: 1445900 kB' 'SReclaimable: 574092 kB' 'SUnreclaim: 871808 kB' 'KernelStack: 27904 kB' 'PageTables: 9276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8673756 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238284 kB' 'VmallocChunk: 0 kB' 'Percpu: 141696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4058484 kB' 'DirectMap2M: 57487360 kB' 'DirectMap1G: 74448896 kB'
00:04:43.762 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # compared each snapshot field (MemTotal through HardwareCorrupted) against AnonHugePages, skipping every non-match
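One relation worth noting in the snapshot just printed: HugePages_Total x Hugepagesize = 1024 x 2048 kB = 2097152 kB, which is exactly the Hugetlb figure and also the size that get_test_nr_hugepages was asked for (2097152 / 2048 = 1024 pages). The snippet below re-derives that relation from a live /proc/meminfo; it assumes only the default 2048 kB pool is in use and no surplus pages exist, and it is a standalone check rather than part of the test suite.

#!/usr/bin/env bash
# Standalone consistency check, assuming only the default 2048 kB pool is in
# use and there are no surplus pages (as in the snapshot above).
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
size_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
hugetlb_kb=$(awk '/^Hugetlb:/ {print $2}' /proc/meminfo)

if (( total * size_kb == hugetlb_kb )); then
    echo "consistent: ${total} pages x ${size_kb} kB = ${hugetlb_kb} kB"
else
    echo "mismatch: ${total} x ${size_kb} kB != ${hugetlb_kb} kB"
fi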
00:04:43.763 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:43.763 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:43.763 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:43.763 10:12:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
00:04:43.763 10:12:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:43.763 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@17-29 -- # get=HugePages_Surp, node='', mem_f=/proc/meminfo, mapfile -t mem, mem=("${mem[@]#Node +([0-9]) }")
00:04:43.763 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108455596 kB' 'MemAvailable: 112181408 kB' 'Buffers: 4132 kB' 'Cached: 10602288 kB' 'SwapCached: 0 kB' 'Active: 7552572 kB' 'Inactive: 3701232 kB' 'Active(anon): 7061140 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 650772 kB' 'Mapped: 183184 kB' 'Shmem: 6413756 kB' 'KReclaimable: 574092 kB' 'Slab: 1445980 kB' 'SReclaimable: 574092 kB' 'SUnreclaim: 871888 kB' 'KernelStack: 27824 kB' 'PageTables: 9040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8675392 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238316 kB' 'VmallocChunk: 0 kB' 'Percpu: 141696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4058484 kB' 'DirectMap2M: 57487360 kB' 'DirectMap1G: 74448896 kB'
00:04:43.763 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # compared each snapshot field (MemTotal through HugePages_Rsvd) against HugePages_Surp, skipping every non-match
00:04:43.764 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:43.764 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:43.764 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:43.764 10:12:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
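With anon=0 and surp=0 read back, and HugePages_Rsvd about to be sampled next, verify_nr_hugepages has the figures it needs before checking the pool against what default_setup requested. The sketch below shows the kind of assertions such a verification can make; the exact checks in setup/hugepages.sh may differ, and the expected count of 1024 is simply the request from the trace.

#!/usr/bin/env bash
# Hedged sketch of post-allocation checks; the real verify_nr_hugepages in
# setup/hugepages.sh may assert different or additional conditions.
set -euo pipefail

expected=1024   # the nr_hugepages request seen earlier in this test

total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
free=$(awk  '/^HugePages_Free:/  {print $2}' /proc/meminfo)
rsvd=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)

(( total == expected )) || { echo "pool size is $total, expected $expected"; exit 1; }
(( surp == 0 ))         || { echo "unexpected surplus pages: $surp"; exit 1; }
(( free == total ))     || { echo "$((total - free)) of $total pages already in use"; exit 1; }
echo "hugepage pool OK: total=$total free=$free rsvd=$rsvd surp=$surp"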
00:04:43.764 10:12:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:43.764 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@17-29 -- # get=HugePages_Rsvd, node='', mem_f=/proc/meminfo, mapfile -t mem, mem=("${mem[@]#Node +([0-9]) }")
00:04:43.765 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108456464 kB' 'MemAvailable: 112182276 kB' 'Buffers: 4132 kB' 'Cached: 10602308 kB' 'SwapCached: 0 kB' 'Active: 7550436 kB' 'Inactive: 3701232 kB' 'Active(anon): 7059004 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 648932 kB' 'Mapped: 183092 kB' 'Shmem: 6413776 kB' 'KReclaimable: 574092 kB' 'Slab: 1445964 kB' 'SReclaimable: 574092 kB' 'SUnreclaim: 871872 kB' 'KernelStack: 27664 kB' 'PageTables: 8512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8673792 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238204 kB' 'VmallocChunk: 0 kB' 'Percpu: 141696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4058484 kB' 'DirectMap2M: 57487360 kB' 'DirectMap1G: 74448896 kB'
00:04:43.765 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # compared each snapshot field (MemTotal through SUnreclaim) against HugePages_Rsvd, skipping every non-match
00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.766 
10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.766 
10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:43.766 nr_hugepages=1024 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:43.766 resv_hugepages=0 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:43.766 surplus_hugepages=0 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:43.766 anon_hugepages=0 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 
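The trace above is the body of the get_meminfo helper in setup/common.sh expanding one /proc/meminfo key at a time until it hits the requested one (here HugePages_Rsvd, which echoes 0). A minimal sketch of that helper, reconstructed from the xtrace output; this is a simplified reading of the trace, not the verbatim SPDK source, and details such as option handling may differ:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern used below

    # get_meminfo KEY [NODE]
    # Prints the value of KEY from /proc/meminfo, or from the per-node
    # meminfo file when a NUMA node number is given (as in the trace).
    get_meminfo() {
        local get=$1
        local node=${2:-}
        local var val
        local mem_f=/proc/meminfo
        # With a node argument the per-node file exists and is used instead.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node meminfo lines start with "Node <N> "; strip that prefix.
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"    # value only, e.g. "0" or "1024"
                return 0
            fi
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    # Example matching this run: get_meminfo HugePages_Rsvd  -> 0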
00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:43.766 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108456920 kB' 'MemAvailable: 112182732 kB' 'Buffers: 4132 kB' 'Cached: 10602332 kB' 'SwapCached: 0 kB' 'Active: 7550676 kB' 'Inactive: 3701232 kB' 'Active(anon): 7059244 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 649136 kB' 'Mapped: 183092 kB' 'Shmem: 6413800 kB' 'KReclaimable: 574092 kB' 'Slab: 1445968 kB' 'SReclaimable: 574092 kB' 'SUnreclaim: 871876 kB' 'KernelStack: 27872 kB' 'PageTables: 9264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8673816 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238284 kB' 'VmallocChunk: 0 kB' 'Percpu: 141696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4058484 kB' 'DirectMap2M: 57487360 kB' 'DirectMap1G: 74448896 kB' 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.767 
10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.767 10:12:20 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.767 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
read -r var val _ 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 60080756 kB' 'MemUsed: 5578252 kB' 'SwapCached: 0 kB' 'Active: 1388904 kB' 'Inactive: 288480 kB' 'Active(anon): 1231156 kB' 'Inactive(anon): 0 kB' 'Active(file): 157748 kB' 'Inactive(file): 288480 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1549172 kB' 'Mapped: 36204 kB' 'AnonPages: 131664 kB' 'Shmem: 1102944 kB' 'KernelStack: 13176 kB' 'PageTables: 3584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 321016 kB' 'Slab: 736976 kB' 'SReclaimable: 321016 kB' 'SUnreclaim: 415960 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.768 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.769 10:12:20 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.769 10:12:20 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:43.769 10:12:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:43.770 10:12:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:43.770 10:12:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:43.770 10:12:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:43.770 node0=1024 expecting 1024 00:04:43.770 10:12:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:43.770 00:04:43.770 real 0m4.144s 00:04:43.770 user 0m1.539s 00:04:43.770 sys 0m2.572s 00:04:43.770 10:12:20 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:43.770 10:12:20 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:43.770 ************************************ 00:04:43.770 END TEST default_setup 00:04:43.770 ************************************ 00:04:43.770 10:12:20 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:43.770 10:12:20 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:43.770 10:12:20 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 
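The tail of the default_setup trace is verify_nr_hugepages from setup/hugepages.sh: it checks that the kernel's pool matches the requested count plus surplus and reserved pages, then walks the NUMA nodes and prints "node0=1024 expecting 1024". A rough sketch of that check, reconstructed from the trace; variable names follow the trace, but how the per-node figure is collected is not visible in this excerpt and is assumed here to be a per-node get_meminfo call (see the sketch above):

    # Simplified reconstruction of the check traced above; not the verbatim
    # setup/hugepages.sh source.
    verify_nr_hugepages() {
        local nr_hugepages=1024
        local resv surp node
        local -A nodes_sys nodes_test

        resv=$(get_meminfo HugePages_Rsvd)   # 0 in this run
        surp=$(get_meminfo HugePages_Surp)   # 0 in this run

        # The kernel's pool must account for requested, surplus and reserved pages.
        (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) || return 1

        # Per-node pools; on this two-node box node0 holds all 1024 pages.
        # ASSUMPTION: read via per-node meminfo, which the excerpt does not show.
        for node in /sys/devices/system/node/node[0-9]*; do
            nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
        done

        # Expected layout for default_setup: the whole pool on node 0.
        nodes_test[0]=$nr_hugepages
        for node in "${!nodes_test[@]}"; do
            (( nodes_test[node] += resv + $(get_meminfo HugePages_Surp "$node") ))
            echo "node$node=${nodes_sys[$node]} expecting ${nodes_test[$node]}"
            [[ ${nodes_sys[$node]} == "${nodes_test[$node]}" ]] || return 1
        done
    }

With the values echoed in this log (HugePages_Total=1024, HugePages_Rsvd=0, HugePages_Surp=0) the check reduces to node0=1024 expecting 1024, which is why default_setup passes before the per_node_1G_alloc test starts below.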
00:04:43.770 10:12:20 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.770 10:12:20 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:43.770 ************************************ 00:04:43.770 START TEST per_node_1G_alloc 00:04:43.770 ************************************ 00:04:43.770 10:12:20 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:04:43.770 10:12:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:43.770 10:12:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:43.770 10:12:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:43.770 10:12:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:43.770 10:12:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:43.770 10:12:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:43.770 10:12:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:43.770 10:12:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:43.770 10:12:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:43.770 10:12:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:43.770 10:12:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:43.770 10:12:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:43.770 10:12:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:43.770 10:12:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:43.770 10:12:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:43.770 10:12:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:43.770 10:12:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:43.770 10:12:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:43.770 10:12:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:43.770 10:12:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:43.770 10:12:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:43.770 10:12:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:43.770 10:12:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:43.770 10:12:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:43.770 10:12:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:43.770 10:12:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:43.770 10:12:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:47.982 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:47.982 0000:80:01.7 (8086 0b00): Already using the 
vfio-pci driver 00:04:47.982 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:47.982 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:47.982 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:47.982 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:47.982 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:47.982 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:47.982 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:47.982 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:47.982 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:47.982 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:47.982 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:47.982 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:47.982 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:47.982 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:47.982 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:47.982 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:47.982 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:47.982 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:47.982 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:47.982 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:47.982 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:47.982 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:47.982 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:47.982 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:47.982 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:47.982 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:47.982 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:47.982 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:47.982 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:47.982 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.982 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.982 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.982 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.982 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.982 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.982 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.982 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108494120 kB' 'MemAvailable: 112219924 kB' 'Buffers: 4132 kB' 
'Cached: 10602448 kB' 'SwapCached: 0 kB' 'Active: 7548132 kB' 'Inactive: 3701232 kB' 'Active(anon): 7056700 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 645452 kB' 'Mapped: 182044 kB' 'Shmem: 6413916 kB' 'KReclaimable: 574084 kB' 'Slab: 1445452 kB' 'SReclaimable: 574084 kB' 'SUnreclaim: 871368 kB' 'KernelStack: 27936 kB' 'PageTables: 9368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8658848 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238540 kB' 'VmallocChunk: 0 kB' 'Percpu: 141696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4058484 kB' 'DirectMap2M: 57487360 kB' 'DirectMap1G: 74448896 kB' 00:04:47.982 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.982 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.983 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.983 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.983 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.983 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.983 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.983 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.983 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.983 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.983 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.983 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.983 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.983 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.983 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.983 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.983 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.983 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.983 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.983 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.983 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.983 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.983 10:12:24 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
[... the AnonHugePages lookup walks the snapshot above field by field, from Active down to Percpu, hitting 'continue' on every non-matching key ...]
00:04:47.984 10:12:24
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.984 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.984 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.984 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.984 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.984 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:47.984 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:47.984 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:47.984 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:47.984 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:47.984 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:47.984 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:47.984 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:47.984 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:47.984 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.984 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.984 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.984 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.984 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.984 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.984 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.984 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108498360 kB' 'MemAvailable: 112224164 kB' 'Buffers: 4132 kB' 'Cached: 10602452 kB' 'SwapCached: 0 kB' 'Active: 7547624 kB' 'Inactive: 3701232 kB' 'Active(anon): 7056192 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 644944 kB' 'Mapped: 182032 kB' 'Shmem: 6413920 kB' 'KReclaimable: 574084 kB' 'Slab: 1445436 kB' 'SReclaimable: 574084 kB' 'SUnreclaim: 871352 kB' 'KernelStack: 27792 kB' 'PageTables: 9044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8657252 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238492 kB' 'VmallocChunk: 0 kB' 'Percpu: 141696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4058484 kB' 'DirectMap2M: 
57487360 kB' 'DirectMap1G: 74448896 kB'
[... the HugePages_Surp lookup walks the snapshot above field by field, from MemTotal down to HugePages_Total, hitting 'continue' on every non-matching key ...]
00:04:47.986 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.986 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:47.986 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.986 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.986 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.986 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.986 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.986 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.986 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.986 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:47.986 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:47.986 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:47.986 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:47.986 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:47.986 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:47.986 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:47.986 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:47.986 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.986 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.986 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.986 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.986 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.986 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.986 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.986 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108499192 kB' 'MemAvailable: 112224996 kB' 'Buffers: 4132 kB' 'Cached: 10602472 kB' 'SwapCached: 0 kB' 'Active: 7547388 kB' 'Inactive: 3701232 kB' 'Active(anon): 7055956 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 645224 kB' 'Mapped: 181968 kB' 'Shmem: 6413940 kB' 'KReclaimable: 574084 kB' 'Slab: 1445492 kB' 'SReclaimable: 574084 kB' 'SUnreclaim: 871408 kB' 'KernelStack: 27920 kB' 'PageTables: 9212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8656032 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238460 kB' 'VmallocChunk: 0 kB' 'Percpu: 141696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 
1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4058484 kB' 'DirectMap2M: 57487360 kB' 'DirectMap1G: 74448896 kB'
[... the HugePages_Rsvd lookup walks the snapshot above field by field, from MemTotal down to Bounce, hitting 'continue' on every non-matching key ...]
00:04:47.987 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.987 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.987 10:12:24
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.987 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.987 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.987 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.987 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.987 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.987 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.987 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.987 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.987 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.987 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.987 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.987 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.987 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.987 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.987 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.987 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.987 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.987 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.987 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.987 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.987 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.987 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.987 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.987 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.987 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.987 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.987 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.987 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.987 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.987 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.987 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.987 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.987 10:12:24 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.987 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.987 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.987 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.987 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.987 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.987 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.987 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.987 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.987 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.987 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.987 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.987 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.987 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.987 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.987 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.987 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.987 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.987 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.987 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.987 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.987 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.987 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.988 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.988 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.988 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.988 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.988 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.988 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.988 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.988 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.988 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.988 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.988 10:12:24 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.988 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.988 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.988 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.988 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:47.988 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:47.988 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:47.988 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:47.988 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:47.988 nr_hugepages=1024 00:04:47.988 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:47.988 resv_hugepages=0 00:04:47.988 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:47.988 surplus_hugepages=0 00:04:47.988 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:47.988 anon_hugepages=0 00:04:47.988 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:47.988 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:47.988 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:47.988 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:47.988 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:47.988 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:47.988 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:47.988 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.988 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.988 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.988 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.988 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.988 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.988 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.988 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108502352 kB' 'MemAvailable: 112228156 kB' 'Buffers: 4132 kB' 'Cached: 10602492 kB' 'SwapCached: 0 kB' 'Active: 7546696 kB' 'Inactive: 3701232 kB' 'Active(anon): 7055264 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 644552 kB' 'Mapped: 181952 kB' 'Shmem: 6413960 kB' 
'KReclaimable: 574084 kB' 'Slab: 1445492 kB' 'SReclaimable: 574084 kB' 'SUnreclaim: 871408 kB' 'KernelStack: 27808 kB' 'PageTables: 8868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8656056 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238380 kB' 'VmallocChunk: 0 kB' 'Percpu: 141696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4058484 kB' 'DirectMap2M: 57487360 kB' 'DirectMap1G: 74448896 kB' 00:04:47.988 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.988 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.988 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.988 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.988 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.988 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.988 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.988 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.988 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.988 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.988 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.988 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.988 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.988 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.988 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.988 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.988 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.988 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.988 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.988 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.988 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.988 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.988 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.988 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.988 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:47.988 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- 
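The pattern being exercised over and over in this section of the trace is a single helper: get_meminfo reads either /proc/meminfo or a node-specific meminfo file and prints the value of one key. Below is a minimal bash sketch reconstructed from the xtrace records above; the function body is inferred from the trace, not copied from the SPDK tree, so details may differ.

  shopt -s extglob   # needed for the +([0-9]) pattern used below

  # Sketch of the get_meminfo helper whose execution is traced above.
  get_meminfo() {
      local get=$1 node=$2
      local var val
      local mem_f mem
      mem_f=/proc/meminfo
      # When a node index is given, read that node's own meminfo instead
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem <"$mem_f"
      # Per-node files prefix every line with "Node <n> "; strip that prefix
      mem=("${mem[@]#Node +([0-9]) }")
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue
          echo "$val" && return 0
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

  # e.g. get_meminfo HugePages_Rsvd    -> 0  (system-wide, as in the records above)
  #      get_meminfo HugePages_Surp 0  -> 0  (NUMA node 0, as in the records below)

Each "[[ ... == \H\u\g\e\P\a\g\e\s... ]] / continue" pair in the raw log is simply one iteration of that while loop under xtrace.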
00:04:47.989 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:47.989 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:04:47.989 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:47.989 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:47.989 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:47.989 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:04:47.989 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:47.989 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:47.990 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:47.990 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:47.990 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:47.990 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:47.990 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:47.990 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:47.990 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:47.990 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:47.990 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:04:47.990 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:47.990 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:47.990 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:47.990 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:47.990 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 61159316 kB' 'MemUsed: 4499692 kB' 'SwapCached: 0 kB' 'Active: 1386500 kB' 'Inactive: 288480 kB' 'Active(anon): 1228752 kB' 'Inactive(anon): 0 kB' 'Active(file): 157748 kB' 'Inactive(file): 288480 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1549300 kB' 'Mapped: 35440 kB' 'AnonPages: 128824 kB' 'Shmem: 1103072 kB' 'KernelStack: 13144 kB' 'PageTables: 3312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 321016 kB' 'Slab: 736560 kB' 'SReclaimable: 321016 kB' 'SUnreclaim: 415544 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace condensed, 00:04:47.990: setup/common.sh@31-32 scans the node0 meminfo keys against HugePages_Surp; every non-matching key (MemTotal onward) produces IFS=': ' / read -r var val _ / continue]
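The records just above also show the per-node bookkeeping around that helper: get_nodes (hugepages.sh@27-33) counts the NUMA nodes and records how many 2048 kB hugepages each one currently has, and the hugepages.sh@115-117 loop then folds resv and each node's HugePages_Surp into nodes_test. A sketch under those assumptions, reusing the get_meminfo sketch from earlier; how nodes_test is seeded, the sysfs path used for the per-node count, and what the totals are finally compared against are not visible in this excerpt and are assumed.

  shopt -s extglob
  declare -a nodes_sys nodes_test   # node index -> hugepage count

  # What the traced get_nodes appears to do: one entry per /sys node directory.
  get_nodes() {
      local node
      for node in /sys/devices/system/node/node+([0-9]); do
          # assumed source of the "512" seen in the trace
          nodes_sys[${node##*node}]=$(<"$node/hugepages/hugepages-2048kB/nr_hugepages")
      done
      (( ${#nodes_sys[@]} > 0 ))   # no_nodes check, as in hugepages.sh@32-33
  }

  # Mirrors the hugepages.sh@115-117 loop: fold reserved and surplus pages
  # into the expected per-node count.
  check_nodes() {
      local node surp resv=$1
      for node in "${!nodes_test[@]}"; do
          (( nodes_test[node] += resv ))
          surp=$(get_meminfo HugePages_Surp "$node") || surp=0
          (( nodes_test[node] += surp ))
      done
  }

On this host both resv and the per-node surplus are 0, which is why the trace shows the arithmetic collapsing to "(( nodes_test[node] += 0 ))".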
00:04:47.991 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:47.991 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:47.991 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:47.991 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:47.991 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:47.991 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:47.991 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:47.991 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:47.991 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1
00:04:47.991 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:47.991 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:47.991 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:47.991 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
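With node 0 accounted for and the loop moving on to node 1, the values the test is juggling can be read straight off the meminfo dumps above and below:

  # Values reported in the captured dumps (2048 kB pages):
  #   system : HugePages_Total=1024  HugePages_Free=1024  HugePages_Rsvd=0  HugePages_Surp=0
  #   node0  : HugePages_Total=512   HugePages_Free=512   HugePages_Surp=0
  #   node1  : HugePages_Total=512   HugePages_Free=512   HugePages_Surp=0
  # so the per-node pools add up to the system-wide pool the test configured:
  (( 512 + 512 == 1024 )) && echo "per-node hugepage pools match nr_hugepages"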
10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.991 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.991 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679840 kB' 'MemFree: 47343620 kB' 'MemUsed: 13336220 kB' 'SwapCached: 0 kB' 'Active: 6160612 kB' 'Inactive: 3412752 kB' 'Active(anon): 5826928 kB' 'Inactive(anon): 0 kB' 'Active(file): 333684 kB' 'Inactive(file): 3412752 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9057348 kB' 'Mapped: 146512 kB' 'AnonPages: 516208 kB' 'Shmem: 5310912 kB' 'KernelStack: 14696 kB' 'PageTables: 5712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 253068 kB' 'Slab: 708932 kB' 'SReclaimable: 253068 kB' 'SUnreclaim: 455864 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:47.991 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.991 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.991 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.991 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.991 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.991 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.991 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.991 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.991 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.991 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.991 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.991 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.991 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.991 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.991 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.991 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.991 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.991 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.991 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.991 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.991 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.991 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.991 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
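Reader's note on the trace at this point: the dump just above is the node-1 meminfo snapshot, and the long run of IFS=': ' / read -r var val _ / [[ ... ]] / continue entries that follows is setup/common.sh's get_meminfo helper scanning that snapshot for the HugePages_Surp field, one meminfo key per iteration. A minimal bash sketch of that helper, reconstructed from the xtrace lines seen here; the names come straight from the trace, while anything the trace does not show (argument handling, error paths) is simplified and should be treated as an assumption rather than the exact SPDK implementation:

    shopt -s extglob   # the "+([0-9])" prefix strip below needs extended globs

    # get_meminfo <field> [node]  ->  prints the numeric value of <field>.
    # Sketch of the helper being traced above (setup/common.sh), simplified.
    get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # A per-node query reads that node's own meminfo file when it exists.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node N "; strip that prefix.
        mem=("${mem[@]#Node +([0-9]) }")
        # Walk the "Key: value kB" lines until the requested key matches.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
    }

    # e.g. get_meminfo HugePages_Surp 1   ->  0 on this machine

Each repeated IFS=': ' / read / [[ ... ]] / continue block in this log is one iteration of that while loop, unrolled by xtrace.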
00:04:47.991 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.991 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.991 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.991 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.991 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.991 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.991 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.991 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.991 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.991 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.991 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.991 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.991 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.991 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.991 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.991 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.992 
10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.992 
10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:47.992 node0=512 expecting 512 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 
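To make the bookkeeping around here easier to follow: hugepages.sh folds the reserved and surplus counts it just read into each node's expected total (the (( nodes_test[node] += ... )) entries), then reports and compares them; node 1's report, the timing summary, and the END banner follow right below. A bash sketch of that logic, reusing the get_meminfo helper sketched earlier and with this run's numbers stubbed in; the array names follow the trace, while the stubbed values, the echo format, and the final comparison are inferred from the "node0=512 expecting 512" output and should be read as assumptions:

    # Per-node accounting as it appears in the trace (setup/hugepages.sh@115-130),
    # with this run's values filled in so the snippet stands on its own.
    nodes_test=([0]=512 [1]=512)   # pages the test expects on each NUMA node
    nodes_sys=([0]=512 [1]=512)    # pages the kernel actually reports per node
    sorted_t=()
    sorted_s=()
    resv=0                         # HugePages_Rsvd, gathered a step earlier

    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))                 # hugepages.sh@116
        surp=$(get_meminfo HugePages_Surp "$node")     # 0 for both nodes here
        (( nodes_test[node] += surp ))                 # hugepages.sh@117
    done

    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1                   # set of expected counts
        sorted_s[nodes_sys[node]]=1                    # set of observed counts
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    done

    # The test passes when both sets collapse to the same keys, which is the
    # "[[ 512 == 512 ]]" style comparison at hugepages.sh@130 below.
    [[ ${!sorted_s[*]} == "${!sorted_t[*]}" ]] && echo 'per-node allocation matches'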
00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:04:47.992 node1=512 expecting 512
00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:47.992 
00:04:47.992 real 0m4.023s
00:04:47.992 user 0m1.524s
00:04:47.992 sys 0m2.560s
00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:47.992 10:12:24 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:47.992 ************************************
00:04:47.992 END TEST per_node_1G_alloc
00:04:47.992 ************************************
00:04:47.992 10:12:24 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:04:47.992 10:12:24 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:04:47.992 10:12:24 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:47.993 10:12:24 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:47.993 10:12:24 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:47.993 ************************************
00:04:47.993 START TEST even_2G_alloc
00:04:47.993 ************************************
00:04:47.993 10:12:24 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc
00:04:47.993 10:12:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:04:47.993 10:12:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:47.993 10:12:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:47.993 10:12:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:47.993 10:12:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:47.993 10:12:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:47.993 10:12:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:47.993 10:12:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:47.993 10:12:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:47.993 10:12:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:47.993 10:12:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:47.993 10:12:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:47.993 10:12:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:47.993 10:12:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:47.993 10:12:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:47.993 10:12:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:47.993 10:12:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:04:47.993 10:12:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:04:47.993 10:12:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:47.993 10:12:24 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:47.993 10:12:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:47.993 10:12:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:47.993 10:12:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:47.993 10:12:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:47.993 10:12:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:47.993 10:12:24 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:47.993 10:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:47.993 10:12:24 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:51.381 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:51.381 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:51.381 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:51.381 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:51.381 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:51.381 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:51.381 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:51.381 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:51.381 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:51.381 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:51.381 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:51.381 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:51.381 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:51.381 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:51.381 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:51.381 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:51.381 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:51.680 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:51.680 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:51.680 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:51.680 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:51.680 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:51.680 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:51.680 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:51.680 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:51.680 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:51.680 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:51.680 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:51.680 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:51.680 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:51.680 10:12:28 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.680 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:51.680 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:51.680 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.680 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.680 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.680 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.680 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108518140 kB' 'MemAvailable: 112243944 kB' 'Buffers: 4132 kB' 'Cached: 10602632 kB' 'SwapCached: 0 kB' 'Active: 7548260 kB' 'Inactive: 3701232 kB' 'Active(anon): 7056828 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 645984 kB' 'Mapped: 181912 kB' 'Shmem: 6414100 kB' 'KReclaimable: 574084 kB' 'Slab: 1445552 kB' 'SReclaimable: 574084 kB' 'SUnreclaim: 871468 kB' 'KernelStack: 27776 kB' 'PageTables: 8508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8656572 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238236 kB' 'VmallocChunk: 0 kB' 'Percpu: 141696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4058484 kB' 'DirectMap2M: 57487360 kB' 'DirectMap1G: 74448896 kB' 00:04:51.680 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.680 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.680 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.680 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.680 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.680 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.680 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.680 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.680 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.680 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.680 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.680 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.680 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.680 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.680 10:12:28 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:51.680 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.680 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.680 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.680 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.680 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.680 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.680 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.680 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.680 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.680 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.680 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.680 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.680 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.680 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.680 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.680 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.680 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.680 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.680 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.680 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.680 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
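Stepping back from the raw scan for a moment: the even_2G_alloc test that started above asks for 2 GiB of default-size hugepages (get_test_nr_hugepages 2097152), splits them evenly across the two NUMA nodes, and re-runs scripts/setup.sh with NRHUGE=1024 and HUGE_EVEN_ALLOC=yes; the verification pass now walking /proc/meminfo is checking the result. The arithmetic behind the 1024 and 512 figures, as a small bash sketch built only from numbers visible in this log (the helper bodies in setup/hugepages.sh are not reproduced here):

    # Why the trace shows nr_hugepages=1024 and nodes_test[...]=512 in this run.
    size_kb=2097152        # requested: get_test_nr_hugepages 2097152 (2 GiB)
    hugepage_kb=2048       # 'Hugepagesize: 2048 kB' in the meminfo dumps
    no_nodes=2             # _no_nodes=2 in the trace

    nr_hugepages=$(( size_kb / hugepage_kb ))   # 1024 hugepages in total
    per_node=$(( nr_hugepages / no_nodes ))     # 512 hugepages on node0 and on node1
    echo "NRHUGE=$nr_hugepages, $per_node per node"

    # The allocation itself is then delegated to the SPDK setup script, which is
    # the "NRHUGE=1024 ... HUGE_EVEN_ALLOC=yes ... scripts/setup.sh" step above:
    #   NRHUGE=1024 HUGE_EVEN_ALLOC=yes ./scripts/setup.sh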
00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
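One more note on the verification pass in progress: before comparing the explicit hugepage pools, verify_nr_hugepages checks whether transparent hugepages could be inflating the numbers (the "[[ always [madvise] never != ... ]]" entry further up) and reads AnonHugePages, which comes back as 0 a little further below. A short bash sketch of that probe; the trace shows only the THP state string and the field name, so the sysfs path and the plain-awk read used here are assumptions standing in for the real helper calls:

    # Probe for anonymous (transparent) hugepages before checking the explicit pools.
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        # THP is not hard-disabled, so AnonHugePages may be non-zero; read it.
        anon=$(awk '$1 == "AnonHugePages:" { print $2 }' /proc/meminfo)
    fi
    echo "AnonHugePages: $anon kB"   # 0 kB in this run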
00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.681 10:12:28 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.681 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108520356 kB' 'MemAvailable: 112246160 kB' 'Buffers: 4132 kB' 'Cached: 10602632 kB' 'SwapCached: 0 kB' 'Active: 7548164 kB' 'Inactive: 3701232 kB' 'Active(anon): 7056732 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 645908 kB' 'Mapped: 181912 kB' 'Shmem: 6414100 kB' 'KReclaimable: 574084 kB' 'Slab: 1445448 kB' 'SReclaimable: 574084 kB' 'SUnreclaim: 871364 kB' 'KernelStack: 27760 kB' 'PageTables: 8708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8656608 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238236 kB' 'VmallocChunk: 0 kB' 'Percpu: 141696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4058484 kB' 'DirectMap2M: 57487360 kB' 'DirectMap1G: 74448896 kB' 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.682 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.683 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.683 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.683 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.683 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.683 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.683 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.683 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.683 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.683 10:12:28 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.683 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.683 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.683 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.683 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.683 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.683 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.683 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.683 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.683 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.683 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.683 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.683 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.683 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.683 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.683 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.683 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.683 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.683 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.683 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.683 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.683 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.683 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.683 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.683 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.683 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.683 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.683 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.683 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.683 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.683 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.683 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.683 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.683 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.683 10:12:28 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:51.683 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
[... identical per-field scan trace ("[[ <field> == HugePages_Surp ]] / continue / IFS=': ' / read -r var val _") repeated for each remaining /proc/meminfo field ...]
00:04:51.685 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:51.685 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:51.685 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:51.685 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:51.685 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:51.685 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:51.685 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:51.685 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:51.685 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:51.685 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:51.685 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:51.685 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:51.685 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:51.685 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:51.685 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:51.685 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:51.685 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108520388 kB' 'MemAvailable: 112246192 kB' 'Buffers: 4132 kB' 'Cached: 10602656 kB' 'SwapCached: 0 kB' 'Active: 7547492 kB' 'Inactive: 3701232 kB' 'Active(anon): 7056060 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 645228 kB' 'Mapped: 181912 kB' 'Shmem: 6414124 kB' 'KReclaimable: 574084 kB' 'Slab: 1445484 kB' 'SReclaimable: 574084 kB' 'SUnreclaim: 871400 kB' 'KernelStack: 27776 kB' 'PageTables: 8764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8656768 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238236 kB' 'VmallocChunk: 0 kB' 'Percpu: 141696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4058484 kB' 'DirectMap2M: 57487360 kB' 'DirectMap1G: 74448896 kB'
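The snapshot above already shows the state even_2G_alloc is driving toward: 'HugePages_Total: 1024' at 'Hugepagesize: 2048 kB', and 1024 x 2048 kB = 2097152 kB, which matches the 'Hugetlb: 2097152 kB' line, i.e. 2 GiB of 2 MiB pages. A quick way to re-check that relationship on any host (plain awk over /proc/meminfo, not part of the test scripts):

awk '/^HugePages_Total/ {n=$2} /^Hugepagesize/ {sz=$2} /^Hugetlb:/ {tot=$2}
     END {printf "%d pages x %d kB = %d kB (Hugetlb reports %d kB)\n", n, sz, n*sz, tot}' /proc/meminfo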
[... identical per-field scan trace ("[[ <field> == HugePages_Rsvd ]] / continue / IFS=': ' / read -r var val _") repeated for each /proc/meminfo field ...]
00:04:51.687 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:51.687 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:51.687 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:51.687 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:51.687 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:04:51.687 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:04:51.687 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:04:51.687 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
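What the long trace above boils down to: get_meminfo in setup/common.sh slurps the meminfo file with mapfile, then walks it with IFS=': ' read -r var val _ until the requested field name matches, echoes the value, and returns; hugepages.sh captures that as surp=0 and resv=0 and prints the nr_hugepages/resv_hugepages/surplus_hugepages/anon_hugepages summary. A minimal stand-alone sketch of the same parsing idea (illustrative names, not the exact SPDK helper):

#!/usr/bin/env bash
# Sketch only: print the value of one /proc/meminfo field, the way the
# traced helper does it -- a pure-bash field scan, no grep/awk fork.
get_meminfo_field() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"      # e.g. "1024" for HugePages_Total, "0" for HugePages_Surp
            return 0
        fi
    done < /proc/meminfo
    return 1
}

# Usage mirroring the accounting in the log:
surp=$(get_meminfo_field HugePages_Surp)
resv=$(get_meminfo_field HugePages_Rsvd)
echo "surp=$surp resv=$resv"

One plausible reason for the pure-bash loop (an assumption, not stated in the log) is that it avoids forking an external tool for every one of these many lookups while xtrace is on.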
00:04:51.687 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:51.687 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:51.687 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:51.687 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:51.687 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:51.687 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:51.687 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:51.687 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:51.687 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:51.687 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:51.687 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:51.687 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:51.687 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:51.687 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:51.687 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108520712 kB' 'MemAvailable: 112246516 kB' 'Buffers: 4132 kB' 'Cached: 10602692 kB' 'SwapCached: 0 kB' 'Active: 7547848 kB' 'Inactive: 3701232 kB' 'Active(anon): 7056416 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 645572 kB' 'Mapped: 181912 kB' 'Shmem: 6414160 kB' 'KReclaimable: 574084 kB' 'Slab: 1445484 kB' 'SReclaimable: 574084 kB' 'SUnreclaim: 871400 kB' 'KernelStack: 27792 kB' 'PageTables: 8856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8657156 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238236 kB' 'VmallocChunk: 0 kB' 'Percpu: 141696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4058484 kB' 'DirectMap2M: 57487360 kB' 'DirectMap1G: 74448896 kB'
[... identical per-field scan trace ("[[ <field> == HugePages_Total ]] / continue / IFS=': ' / read -r var val _") repeated for each /proc/meminfo field ...]
00:04:51.689 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:51.689 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:04:51.689 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:51.689 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:51.689 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:51.689 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:04:51.689 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:51.689 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:51.689 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:51.689 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:51.689 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:51.689 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:51.689 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:51.689 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:51.689 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:51.689 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:51.689 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:04:51.689 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:51.689 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:51.689 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
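At this point the test has confirmed HugePages_Total == nr_hugepages + surp + resv (1024 == 1024 + 0 + 0), and get_nodes has found two NUMA nodes, so the even split expects 512 pages on each. A rough sketch of that bookkeeping (illustrative only, not the hugepages.sh code):

#!/usr/bin/env bash
# Sketch: every NUMA node should hold nr_hugepages / no_nodes pages, and the
# per-node totals should add back up to the global HugePages_Total (1024 here).
global=$(awk '/^HugePages_Total/ {print $2}' /proc/meminfo)
sum=0
for node_dir in /sys/devices/system/node/node[0-9]*; do
    per_node=$(awk '/HugePages_Total/ {print $NF}' "$node_dir/meminfo")
    echo "${node_dir##*/}: HugePages_Total=$per_node"
    (( sum += per_node ))
done
echo "sum=$sum global=$global"
(( sum == global )) && echo "even split accounting holds"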
00:04:51.689 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:51.689 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:51.689 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:51.689 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:51.689 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:51.689 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:51.689 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 61167048 kB' 'MemUsed: 4491960 kB' 'SwapCached: 0 kB' 'Active: 1385696 kB' 'Inactive: 288480 kB' 'Active(anon): 1227948 kB' 'Inactive(anon): 0 kB' 'Active(file): 157748 kB' 'Inactive(file): 288480 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1549444 kB' 'Mapped: 35456 kB' 'AnonPages: 127960 kB' 'Shmem: 1103216 kB' 'KernelStack: 13144 kB' 'PageTables: 3408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 321016 kB' 'Slab: 736676 kB' 'SReclaimable: 321016 kB' 'SUnreclaim: 415660 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
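For the per-node check the helper switches from /proc/meminfo to /sys/devices/system/node/node0/meminfo; those sysfs lines carry a "Node 0 " prefix, which the trace strips with mem=("${mem[@]#Node +([0-9]) }") before running the same field scan. A sketch of a per-node lookup along the same lines (assumed helper name, not the SPDK function):

#!/usr/bin/env bash
# Sketch: read one field from a node-local meminfo file, dropping the
# "Node <id> " prefix each line carries before matching the field name.
get_node_meminfo_field() {
    local node=$1 get=$2 line var val _
    while read -r line; do
        line=${line#"Node $node "}          # "Node 0 MemTotal: ..." -> "MemTotal: ..."
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "/sys/devices/system/node/node$node/meminfo"
    return 1
}

get_node_meminfo_field 0 HugePages_Surp   # prints 0 on this host, per the dump above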
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.689 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.689 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.689 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.689 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.689 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.689 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.689 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.689 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.689 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.689 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.689 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.689 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.689 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.689 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.689 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.689 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.689 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.689 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.689 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.689 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.689 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.689 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.689 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.689 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.689 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.689 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.689 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.689 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.689 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.689 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.689 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.689 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.689 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.689 10:12:28 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.689 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.689 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.689 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.689 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.689 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.689 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.689 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.689 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.689 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.689 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.689 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.689 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.690 
10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.690 
10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.690 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679840 kB' 'MemFree: 47353164 kB' 'MemUsed: 13326676 kB' 'SwapCached: 0 kB' 'Active: 6162152 kB' 'Inactive: 3412752 kB' 'Active(anon): 5828468 kB' 'Inactive(anon): 0 kB' 'Active(file): 333684 kB' 'Inactive(file): 3412752 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9057404 kB' 'Mapped: 146456 kB' 'AnonPages: 517604 kB' 'Shmem: 5310968 kB' 'KernelStack: 14648 kB' 'PageTables: 5408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 253068 kB' 'Slab: 708808 kB' 'SReclaimable: 253068 kB' 'SUnreclaim: 455740 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.691 10:12:28 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.691 10:12:28 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.691 
10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.691 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.692 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.692 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.692 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.692 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.692 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.692 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.692 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.692 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.692 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.692 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.692 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.692 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.692 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.692 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:51.692 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:51.692 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:51.692 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.692 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:51.692 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:51.692 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:51.692 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:51.692 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:51.692 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:51.692 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:51.692 node0=512 expecting 512 00:04:51.692 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:51.692 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:51.692 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:51.692 
10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:51.692 node1=512 expecting 512 00:04:51.692 10:12:28 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:51.692 00:04:51.692 real 0m3.959s 00:04:51.692 user 0m1.574s 00:04:51.692 sys 0m2.442s 00:04:51.692 10:12:28 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.692 10:12:28 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:51.692 ************************************ 00:04:51.692 END TEST even_2G_alloc 00:04:51.692 ************************************ 00:04:51.692 10:12:28 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:51.692 10:12:28 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:51.692 10:12:28 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:51.692 10:12:28 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.692 10:12:28 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:51.692 ************************************ 00:04:51.692 START TEST odd_alloc 00:04:51.692 ************************************ 00:04:51.692 10:12:28 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:04:51.692 10:12:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:51.692 10:12:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:51.692 10:12:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:51.692 10:12:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:51.692 10:12:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:51.692 10:12:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:51.692 10:12:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:51.692 10:12:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:51.692 10:12:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:51.692 10:12:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:51.692 10:12:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:51.692 10:12:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:51.692 10:12:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:51.692 10:12:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:51.692 10:12:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:51.692 10:12:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:51.692 10:12:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:04:51.692 10:12:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:51.692 10:12:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:51.692 10:12:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:51.692 10:12:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:51.692 10:12:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # 
: 0 00:04:51.692 10:12:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:51.692 10:12:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:51.692 10:12:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:51.692 10:12:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:51.692 10:12:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:51.692 10:12:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:55.905 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:55.905 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:55.905 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:55.905 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:55.905 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:55.905 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:55.905 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:55.905 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:55.905 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:55.905 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:55.905 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:55.905 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:55.905 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:55.905 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:55.905 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:55.905 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:55.905 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:55.905 10:12:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:55.905 10:12:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:55.905 10:12:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:55.905 10:12:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:55.905 10:12:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:55.905 10:12:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:55.905 10:12:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:55.905 10:12:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:55.905 10:12:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:55.905 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:55.905 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:55.905 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:55.905 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:55.905 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.905 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.905 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.905 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile 
-t mem 00:04:55.905 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.905 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.905 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108536924 kB' 'MemAvailable: 112262728 kB' 'Buffers: 4132 kB' 'Cached: 10602828 kB' 'SwapCached: 0 kB' 'Active: 7550792 kB' 'Inactive: 3701232 kB' 'Active(anon): 7059360 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 648248 kB' 'Mapped: 181976 kB' 'Shmem: 6414296 kB' 'KReclaimable: 574084 kB' 'Slab: 1445300 kB' 'SReclaimable: 574084 kB' 'SUnreclaim: 871216 kB' 'KernelStack: 27968 kB' 'PageTables: 9104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508428 kB' 'Committed_AS: 8660780 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238460 kB' 'VmallocChunk: 0 kB' 'Percpu: 141696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4058484 kB' 'DirectMap2M: 57487360 kB' 'DirectMap1G: 74448896 kB' 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.906 
10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
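The long run of "[[ <field> == AnonHugePages ]] ... continue" entries above and below this point is the get_meminfo helper in setup/common.sh stepping through /proc/meminfo one field at a time, skipping every key until it reaches AnonHugePages; verify_nr_hugepages then stores the result as anon (0 in this run, visible a few entries further on) before tallying the reserved 2048 kB huge pages. For comparison only (a stand-alone sketch, not the script's own helper), the same single field can be read with one awk match:

    # prints the AnonHugePages value in kB (0 in the trace above)
    awk '$1 == "AnonHugePages:" { print $2 }' /proc/meminfo

The traced loop presumably stays generic because the same helper also answers the per-node queries against /sys/devices/system/node/node<N>/meminfo seen earlier in this log.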
00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.906 10:12:32 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.906 10:12:32 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.906 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.907 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.907 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.907 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.907 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.907 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.907 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.907 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.907 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.907 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.907 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.907 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.907 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.907 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.907 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.907 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.907 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.907 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.907 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.907 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.907 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.907 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.907 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.907 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.907 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.907 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.907 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.907 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.907 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.907 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.907 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:55.907 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 
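The same lookup pattern recurs throughout this section: pick /proc/meminfo or a per-node /sys/devices/system/node/node<N>/meminfo file, mapfile it into an array, strip the "Node <N> " prefix that the per-node files carry, then read each line as "key value" with IFS=': ' until the requested field matches. The sketch below is reconstructed from the trace under that reading; the function name is hypothetical and the real helper in setup/common.sh may differ in detail.

    # Minimal sketch of the meminfo lookup traced above (hypothetical name).
    get_meminfo_sketch() {
        local get=$1 node=$2
        local mem_f=/proc/meminfo
        local -a mem
        local line var val _
        # Per-node queries read that node's own meminfo file instead.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node meminfo lines are prefixed with "Node <N> "; drop it.
        [[ -n $node ]] && mem=("${mem[@]#Node $node }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # skip non-matching fields, as in the trace
            echo "$val"                        # kB for sized fields, a bare count for HugePages_*
            return 0
        done
        return 1
    }

    # Example matching the node 0 query earlier in this log:
    #   get_meminfo_sketch HugePages_Free 0   ->  512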
00:04:55.907 10:12:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:55.907 10:12:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:55.907 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:55.907 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:55.907 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:55.907 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:55.907 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.907 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.907 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.907 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.907 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.907 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.907 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.907 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108538296 kB' 'MemAvailable: 112264100 kB' 'Buffers: 4132 kB' 'Cached: 10602828 kB' 'SwapCached: 0 kB' 'Active: 7552144 kB' 'Inactive: 3701232 kB' 'Active(anon): 7060712 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 649280 kB' 'Mapped: 182052 kB' 'Shmem: 6414296 kB' 'KReclaimable: 574084 kB' 'Slab: 1445388 kB' 'SReclaimable: 574084 kB' 'SUnreclaim: 871304 kB' 'KernelStack: 28064 kB' 'PageTables: 9616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508428 kB' 'Committed_AS: 8659184 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238476 kB' 'VmallocChunk: 0 kB' 'Percpu: 141696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4058484 kB' 'DirectMap2M: 57487360 kB' 'DirectMap1G: 74448896 kB' 00:04:55.907 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.907 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.907 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.907 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.907 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.907 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.907 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.907 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.907 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.907 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
[... the same compare/continue/IFS/read xtrace pattern repeats for every remaining /proc/meminfo field, none of which matches HugePages_Surp; the field values themselves are all listed in the printf snapshot above ...]
00:04:55.908 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read
-r var val _ 00:04:55.908 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.908 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.908 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.908 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.908 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.908 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.908 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.908 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.908 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.908 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.908 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.908 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.908 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.908 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.908 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.908 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.908 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.908 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:55.908 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:55.908 10:12:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:55.908 10:12:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:55.908 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:55.908 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:55.908 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:55.909 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:55.909 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.909 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.909 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.909 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.909 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.909 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.909 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.909 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108540556 kB' 'MemAvailable: 112266360 kB' 'Buffers: 4132 kB' 'Cached: 10602844 kB' 'SwapCached: 0 kB' 'Active: 7550776 kB' 'Inactive: 3701232 kB' 'Active(anon): 7059344 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 
'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 647908 kB' 'Mapped: 182036 kB' 'Shmem: 6414312 kB' 'KReclaimable: 574084 kB' 'Slab: 1445388 kB' 'SReclaimable: 574084 kB' 'SUnreclaim: 871304 kB' 'KernelStack: 27840 kB' 'PageTables: 9096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508428 kB' 'Committed_AS: 8660816 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238412 kB' 'VmallocChunk: 0 kB' 'Percpu: 141696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4058484 kB' 'DirectMap2M: 57487360 kB' 'DirectMap1G: 74448896 kB' 00:04:55.909 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.909 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.909 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.909 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.909 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.909 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.909 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.909 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.909 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.909 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.909 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.909 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.909 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.909 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.909 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.909 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.909 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.909 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.909 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.909 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.909 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.909 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.909 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.909 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.909 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.909 10:12:32 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
[... the compare/continue/IFS/read xtrace pattern repeats for every remaining /proc/meminfo field, none of which matches HugePages_Rsvd; the values are listed in the printf snapshot above ...]
00:04:55.910 10:12:32 setup.sh.hugepages.odd_alloc --
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.910 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.910 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.910 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.910 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.910 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.910 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.910 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.910 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.910 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.910 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.910 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.910 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.910 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.910 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.910 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.910 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.910 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.910 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.910 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.910 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.910 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.910 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.910 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.910 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.910 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.910 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.910 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.910 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.910 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.910 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.910 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.910 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.910 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:55.910 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:55.910 10:12:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:55.910 10:12:32 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:55.910 nr_hugepages=1025 00:04:55.910 10:12:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:55.910 resv_hugepages=0 00:04:55.910 10:12:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:55.910 surplus_hugepages=0 00:04:55.910 10:12:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:55.910 anon_hugepages=0 00:04:55.910 10:12:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:55.910 10:12:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:55.910 10:12:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:55.910 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:55.910 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:55.910 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:55.910 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:55.910 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.910 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.911 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.911 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.911 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.911 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.911 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.911 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108540840 kB' 'MemAvailable: 112266644 kB' 'Buffers: 4132 kB' 'Cached: 10602852 kB' 'SwapCached: 0 kB' 'Active: 7550916 kB' 'Inactive: 3701232 kB' 'Active(anon): 7059484 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 648228 kB' 'Mapped: 181968 kB' 'Shmem: 6414320 kB' 'KReclaimable: 574084 kB' 'Slab: 1445408 kB' 'SReclaimable: 574084 kB' 'SUnreclaim: 871324 kB' 'KernelStack: 28016 kB' 'PageTables: 9368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508428 kB' 'Committed_AS: 8660836 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238492 kB' 'VmallocChunk: 0 kB' 'Percpu: 141696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4058484 kB' 'DirectMap2M: 57487360 kB' 'DirectMap1G: 74448896 kB' 00:04:55.911 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.911 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.911 10:12:32 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
[... the compare/continue/IFS/read xtrace pattern repeats for the remaining /proc/meminfo fields until the HugePages_Total key is reached; the values are listed in the printf snapshot above ...]
00:04:55.912 10:12:32
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.912 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.912 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.912 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.912 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.912 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.912 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.912 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.912 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.912 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:55.912 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.912 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.912 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.912 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:55.912 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:55.912 10:12:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:55.912 10:12:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:55.912 10:12:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:55.912 10:12:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:55.912 10:12:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:55.912 10:12:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:55.912 10:12:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:55.912 10:12:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:55.912 10:12:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:55.912 10:12:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:55.912 10:12:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:55.912 10:12:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:55.912 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:55.912 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:55.912 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:55.912 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:55.912 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.912 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:55.912 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:55.912 10:12:32 
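The xtrace above is setup/common.sh's get_meminfo helper stepping through every field of the (per-node) meminfo file until it reaches the one it was asked for. Condensed into one place, the logic being executed looks roughly like the sketch below. It is reconstructed from the traced commands, not the verbatim SPDK source; the explicit shopt -s extglob line is an assumption carried over from the 'Node +([0-9])' pattern the trace uses.

  #!/usr/bin/env bash
  shopt -s extglob   # needed for the +([0-9]) pattern used when stripping the "Node N " prefix

  get_meminfo() {
      local get=$1 node=$2
      local var val
      local mem_f mem
      mem_f=/proc/meminfo
      # Prefer the per-NUMA-node view when a node number was given and that node exists.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      # Per-node meminfo lines carry a "Node N " prefix; strip it so field names match /proc/meminfo.
      mem=("${mem[@]#Node +([0-9]) }")
      # Field-by-field scan: this is the IFS=': ' / read / [[ ... ]] / continue loop in the trace.
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

Called as get_meminfo HugePages_Total it prints 1025 here, and as get_meminfo HugePages_Surp 0 it prints the node-0 surplus count that setup/hugepages.sh@117 folds into nodes_test above.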
00:04:55.912 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:55.912 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:55.912 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:55.912 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:55.912 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 61187652 kB' 'MemUsed: 4471356 kB' 'SwapCached: 0 kB' 'Active: 1389408 kB' 'Inactive: 288480 kB' 'Active(anon): 1231660 kB' 'Inactive(anon): 0 kB' 'Active(file): 157748 kB' 'Inactive(file): 288480 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1549616 kB' 'Mapped: 35984 kB' 'AnonPages: 131432 kB' 'Shmem: 1103388 kB' 'KernelStack: 13112 kB' 'PageTables: 3056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 321016 kB' 'Slab: 736684 kB' 'SReclaimable: 321016 kB' 'SUnreclaim: 415668 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[per-field scan of the node0 meminfo entries (MemTotal through HugePages_Free), none matching HugePages_Surp]
00:04:55.913 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:55.913 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:55.913 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:55.913 10:12:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:55.913 10:12:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:55.913 10:12:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:55.913 10:12:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:55.913 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:55.913 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1
00:04:55.913 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:55.913 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:55.913 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:55.913 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:55.914 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:55.914 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:55.914 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:55.914 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:55.914 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:55.914 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679840 kB' 'MemFree: 47346776 kB' 'MemUsed: 13333064 kB' 'SwapCached: 0 kB' 'Active: 6166076 kB' 'Inactive: 3412752 kB' 'Active(anon): 5832392 kB' 'Inactive(anon): 0 kB' 'Active(file): 333684 kB' 'Inactive(file): 3412752 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9057412 kB' 'Mapped: 146488 kB' 'AnonPages: 521756 kB' 'Shmem: 5310976 kB' 'KernelStack: 14856 kB' 'PageTables: 5628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 253068 kB' 'Slab: 708724 kB' 'SReclaimable: 253068 kB' 'SUnreclaim: 455656 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
[per-field scan of the node1 meminfo entries (MemTotal through HugePages_Free), none matching HugePages_Surp]
00:04:55.915 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:55.915 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:55.915 10:12:32 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:55.915 10:12:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:55.915 10:12:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:55.915 10:12:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:55.915 10:12:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:55.915 10:12:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:04:55.915 node0=512 expecting 513
00:04:55.915 10:12:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:55.915 10:12:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:55.915 10:12:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:55.915 10:12:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:04:55.915 node1=513 expecting 512
00:04:55.915 10:12:32 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:04:55.915
00:04:55.915 real 0m4.009s
00:04:55.915 user 0m1.573s
00:04:55.915 sys 0m2.497s
00:04:55.915 10:12:32 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:55.915 10:12:32 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:55.915 ************************************
00:04:55.915 END TEST odd_alloc
00:04:55.915 ************************************
00:04:55.915 10:12:32 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:04:55.915 10:12:32 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:04:55.915 10:12:32 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:55.915 10:12:32 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:55.915 10:12:32 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:55.915 ************************************
00:04:55.915 START TEST custom_alloc
00:04:55.915 ************************************
00:04:55.915 10:12:32 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc
00:04:55.915 10:12:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:04:55.915 10:12:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
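Why "node0=512 expecting 513" still passes: odd_alloc requested 1025 pages, an amount that cannot split evenly across the two nodes, and the kernel reported 512 on node0 and 513 on node1 while the test had computed the mirror split. The sorted_t/sorted_s bookkeeping in the trace compares only the multiset of per-node counts, not which node got which count. A hedged bash sketch of that comparison follows; the array contents and echo format are illustrative reconstructions, not the literal setup/hugepages.sh source.

  #!/usr/bin/env bash
  declare -A sorted_t sorted_s
  nodes_test=([0]=513 [1]=512)   # hypothetical per-node split the test computed for the odd 1025-page request
  nodes_sys=([0]=512 [1]=513)    # per-node HugePages_Total actually reported (see the two meminfo dumps above)

  for node in "${!nodes_test[@]}"; do
      echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
      sorted_t[${nodes_test[node]}]=1   # expected counts collected as keys
      sorted_s[${nodes_sys[node]}]=1    # observed counts collected as keys
  done

  # Compare the key lists, i.e. the sets of counts; in the trace both expand to "512 513",
  # so the swapped distribution is accepted (512 + 513 = 1025).
  [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo 'odd_alloc distribution OK: 512 + 513 = 1025'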
00:04:55.915 10:12:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:04:55.915 10:12:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:04:55.915 10:12:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:04:55.915 10:12:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
[get_test_nr_hugepages 1048576 expands via setup/hugepages.sh@49-84: nr_hugepages=512, per-node defaults nodes_test[0]=nodes_test[1]=256]
00:04:55.915 10:12:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:04:55.915 10:12:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:04:55.915 10:12:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
[get_test_nr_hugepages 2097152 expands the same way: nr_hugepages=1024, nodes_test[0]=512]
00:04:55.915 10:12:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:04:55.915 10:12:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:55.915 10:12:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:55.915 10:12:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:55.915 10:12:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:55.915 10:12:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:55.915 10:12:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:55.915 10:12:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
[get_test_nr_hugepages_per_node (setup/hugepages.sh@62-78) assigns nodes_test[0]=512 and nodes_test[1]=1024 from nodes_hp]
00:04:55.916 10:12:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
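Condensed, the custom_alloc preamble just traced converts two per-node size requests into 2 MiB page counts and joins them into the HUGENODE string that setup.sh consumes. The sketch below mirrors that flow under stated assumptions: the 2048 kB page size is taken from the Hugepagesize line in the /proc/meminfo dump further down, and the inline arithmetic stands in for the real get_test_nr_hugepages helper rather than reproducing it.

  #!/usr/bin/env bash
  default_hugepages=2048                             # hugepage size in kB (2 MiB pages)
  declare -a nodes_hp HUGENODE

  nodes_hp[0]=$(( 1048576 / default_hugepages ))     # 1 GiB requested for node 0 -> 512 pages
  nodes_hp[1]=$(( 2097152 / default_hugepages ))     # 2 GiB requested for node 1 -> 1024 pages

  _nr_hugepages=0
  for node in "${!nodes_hp[@]}"; do
      HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")   # same string appended at setup/hugepages.sh@182
      (( _nr_hugepages += nodes_hp[node] ))
  done

  # custom_alloc sets IFS=, so the array collapses to a comma-separated list when expanded with [*]
  ( IFS=,; echo "HUGENODE=${HUGENODE[*]}" )            # HUGENODE=nodes_hp[0]=512,nodes_hp[1]=1024
  echo "total 2 MiB pages requested: $_nr_hugepages"   # 1536, matching nr_hugepages=1536 below

The comma-joined string is what appears verbatim in the HUGENODE assignment above and is handed to scripts/setup.sh in the next step.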
00:04:55.916 10:12:32 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:04:55.916 10:12:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:55.916 10:12:32 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:05:00.131 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver
00:05:00.131 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver
00:05:00.131 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver
00:05:00.131 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:05:00.131 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:05:00.131 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:05:00.131 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:05:00.131 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
00:05:00.131 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver
00:05:00.131 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
00:05:00.131 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
00:05:00.131 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver
00:05:00.131 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver
00:05:00.131 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver
00:05:00.131 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver
00:05:00.131 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver
00:05:00.131 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver
00:05:00.131 10:12:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:05:00.131 10:12:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:05:00.131 10:12:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node
00:05:00.131 10:12:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:00.131 10:12:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:00.131 10:12:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:00.131 10:12:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:00.131 10:12:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:00.131 10:12:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:00.131 10:12:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:00.131 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:00.131 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:05:00.131 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:00.131 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:00.131 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:00.131 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:00.131 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:00.131 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:00.131 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
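Before counting the explicit hugepages it just reserved, verify_nr_hugepages checks that transparent hugepages are not hard-disabled and then samples AnonHugePages from /proc/meminfo, presumably so THP-backed anonymous memory can be tracked separately from the 1536 reserved pages. A rough sketch of that guard is below; the sysfs and proc paths are the standard kernel locations, and the surrounding bookkeeping in setup/hugepages.sh may differ from this simplification.

  #!/usr/bin/env bash
  # e.g. "always [madvise] never", matching the [[ ... != *\[\n\e\v\e\r\]* ]] test in the trace
  thp_state=$(cat /sys/kernel/mm/transparent_hugepage/enabled)

  anon_kb=0
  if [[ $thp_state != *"[never]"* ]]; then
      # THP is enabled (at least for madvise), so some anonymous memory may already be huge-page backed;
      # sample AnonHugePages so it can sit alongside the explicit hugepage counters.
      anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
  fi
  echo "AnonHugePages: ${anon_kb} kB"   # 0 kB in the /proc/meminfo dump that follows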
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.131 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.131 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107481944 kB' 'MemAvailable: 111207748 kB' 'Buffers: 4132 kB' 'Cached: 10603000 kB' 'SwapCached: 0 kB' 'Active: 7551560 kB' 'Inactive: 3701232 kB' 'Active(anon): 7060128 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 648816 kB' 'Mapped: 182000 kB' 'Shmem: 6414468 kB' 'KReclaimable: 574084 kB' 'Slab: 1445264 kB' 'SReclaimable: 574084 kB' 'SUnreclaim: 871180 kB' 'KernelStack: 27776 kB' 'PageTables: 9324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985164 kB' 'Committed_AS: 8658872 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238460 kB' 'VmallocChunk: 0 kB' 'Percpu: 141696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4058484 kB' 'DirectMap2M: 57487360 kB' 'DirectMap1G: 74448896 kB' 00:05:00.131 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.131 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.131 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.131 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.131 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.131 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.131 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.131 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.131 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.131 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.131 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.131 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.131 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.131 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.131 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.131 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.131 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.131 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.131 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.131 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
[setup/common.sh@32 xtrace omitted: each remaining /proc/meminfo key from SwapCached through HardwareCorrupted is compared against AnonHugePages and skipped with continue]
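(Readability note: the lookup this xtrace is exercising amounts to the small loop sketched below. This is a reconstruction from the trace statements above, not the verbatim setup/common.sh; the function name meminfo_lookup, the argument handling and the explicit shopt -s extglob are illustrative assumptions.)

    # meminfo_lookup KEY [NODE] - print the value of one meminfo field, either
    # system-wide (/proc/meminfo) or for a single NUMA node (sysfs per-node file).
    meminfo_lookup() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo mem
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        shopt -s extglob
        # per-node files prefix every line with "Node <N> "; strip that prefix
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # same compare-and-continue seen above
            echo "$val"                        # e.g. 0 for AnonHugePages, 1536 for HugePages_Total
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1                               # key not present
    }
    # e.g. meminfo_lookup AnonHugePages -> 0 on this box, matching the trace that follows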
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.132 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:00.132 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:00.132 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:00.132 10:12:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:00.132 10:12:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:00.132 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:00.132 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:00.132 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:00.132 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:00.132 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.132 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.132 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.132 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.132 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.133 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.133 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.133 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107481952 kB' 'MemAvailable: 111207756 kB' 'Buffers: 4132 kB' 'Cached: 10603004 kB' 'SwapCached: 0 kB' 'Active: 7551668 kB' 'Inactive: 3701232 kB' 'Active(anon): 7060236 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 649016 kB' 'Mapped: 181984 kB' 'Shmem: 6414472 kB' 'KReclaimable: 574084 kB' 'Slab: 1445284 kB' 'SReclaimable: 574084 kB' 'SUnreclaim: 871200 kB' 'KernelStack: 27808 kB' 'PageTables: 9648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985164 kB' 'Committed_AS: 8658888 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238428 kB' 'VmallocChunk: 0 kB' 'Percpu: 141696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4058484 kB' 'DirectMap2M: 57487360 kB' 'DirectMap1G: 74448896 kB' 00:05:00.133 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.133 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.133 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.133 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.133 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
[setup/common.sh@32 xtrace omitted: the keys MemFree through FilePmdMapped are each compared against HugePages_Surp and skipped with continue]
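(The same helper pattern repeats for HugePages_Surp here and again for HugePages_Rsvd and HugePages_Total below. The mem=("${mem[@]#Node +([0-9]) }") step visible in the trace only changes anything for the per-node sysfs files, whose lines carry a "Node <N> " prefix; a one-line illustration, assuming extglob as in the sketch above:)

    shopt -s extglob
    line='Node 0 HugePages_Surp: 0'    # line format of /sys/devices/system/node/node0/meminfo
    echo "${line#Node +([0-9]) }"      # -> HugePages_Surp: 0, same shape as a /proc/meminfo line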
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.134 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.134 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.134 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.134 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.134 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.134 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.134 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.134 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.134 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.134 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.134 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.134 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.134 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.134 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.134 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.134 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.134 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.134 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.134 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.134 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.134 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.134 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.134 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.134 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.134 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.134 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.134 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:00.134 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:00.134 10:12:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:00.134 10:12:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:00.134 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:00.134 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:00.134 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:00.134 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:00.134 
10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.134 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.134 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.134 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.134 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.134 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.134 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.134 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107481952 kB' 'MemAvailable: 111207756 kB' 'Buffers: 4132 kB' 'Cached: 10603004 kB' 'SwapCached: 0 kB' 'Active: 7551668 kB' 'Inactive: 3701232 kB' 'Active(anon): 7060236 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 649016 kB' 'Mapped: 181984 kB' 'Shmem: 6414472 kB' 'KReclaimable: 574084 kB' 'Slab: 1445284 kB' 'SReclaimable: 574084 kB' 'SUnreclaim: 871200 kB' 'KernelStack: 27808 kB' 'PageTables: 9648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985164 kB' 'Committed_AS: 8658908 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238444 kB' 'VmallocChunk: 0 kB' 'Percpu: 141696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4058484 kB' 'DirectMap2M: 57487360 kB' 'DirectMap1G: 74448896 kB' 00:05:00.134 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.134 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.134 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.134 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.134 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.134 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.134 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.134 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.134 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.134 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.134 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.134 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.134 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.134 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.134 10:12:36 
[setup/common.sh@32 xtrace omitted: the keys Cached through CmaFree are each compared against HugePages_Rsvd and skipped with continue]
_ 00:05:00.136 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.136 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.136 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.136 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.136 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.136 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.136 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.136 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.136 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.136 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.136 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.136 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.136 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.136 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:00.136 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:00.136 10:12:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:00.136 10:12:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:05:00.136 nr_hugepages=1536 00:05:00.136 10:12:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:00.136 resv_hugepages=0 00:05:00.136 10:12:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:00.136 surplus_hugepages=0 00:05:00.136 10:12:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:00.136 anon_hugepages=0 00:05:00.136 10:12:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:05:00.136 10:12:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:05:00.136 10:12:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:00.136 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:00.136 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:00.136 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:00.136 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:00.136 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.136 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.136 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.136 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.136 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.136 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:00.136 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.136 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107482912 kB' 'MemAvailable: 111208716 kB' 'Buffers: 4132 kB' 'Cached: 10603044 kB' 'SwapCached: 0 kB' 'Active: 7551748 kB' 'Inactive: 3701232 kB' 'Active(anon): 7060316 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 649012 kB' 'Mapped: 181984 kB' 'Shmem: 6414512 kB' 'KReclaimable: 574084 kB' 'Slab: 1445284 kB' 'SReclaimable: 574084 kB' 'SUnreclaim: 871200 kB' 'KernelStack: 27808 kB' 'PageTables: 9648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985164 kB' 'Committed_AS: 8658932 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238444 kB' 'VmallocChunk: 0 kB' 'Percpu: 141696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4058484 kB' 'DirectMap2M: 57487360 kB' 'DirectMap1G: 74448896 kB' 00:05:00.136 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.136 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.136 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.136 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.136 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.136 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.136 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.136 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.136 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.136 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.136 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.136 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.136 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.136 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.136 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.136 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.136 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.136 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.136 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.136 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.136 10:12:36 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.136 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.136 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.136 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.136 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.136 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.136 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.136 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.136 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.137 10:12:36 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
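The xtrace above is setup/common.sh's get_meminfo walking /proc/meminfo with IFS=': ' read -r var val _, skipping key after key until it reaches the one requested (HugePages_Total here) and echoing its value; the mapfile step strips the "Node <n> " prefix when a per-node file is read instead. A minimal sketch of that lookup, assuming a hypothetical helper name get_meminfo_sketch and the same file layout; it mirrors the traced pattern, not the exact upstream script:

  # Sketch: fetch one field from /proc/meminfo, or from a NUMA node's meminfo
  # file when a node number is given. Mirrors the read loop traced above.
  get_meminfo_sketch() {
    local get=$1 node=${2:-} mem_f=/proc/meminfo var val _
    [[ -n $node ]] && mem_f=/sys/devices/system/node/node${node}/meminfo
    # Per-node meminfo lines carry a "Node <n> " prefix; strip it, then split
    # each line on ": " the same way the traced loop does.
    while IFS=': ' read -r var val _; do
      if [[ $var == "$get" ]]; then
        echo "$val"   # kB for size fields, a bare page count for HugePages_*
        return 0
      fi
    done < <(sed -E 's/^Node [0-9]+ +//' "$mem_f")
    return 1
  }

Against the values printed in this run, get_meminfo_sketch HugePages_Rsvd would return 0 and get_meminfo_sketch HugePages_Total would return 1536, matching the resv=0 and nr_hugepages=1536 echoed above.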
00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.137 10:12:36 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.137 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 61164812 kB' 'MemUsed: 4494196 kB' 'SwapCached: 0 kB' 'Active: 1387964 kB' 'Inactive: 288480 kB' 'Active(anon): 1230216 kB' 'Inactive(anon): 0 kB' 'Active(file): 157748 kB' 'Inactive(file): 288480 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1549676 kB' 'Mapped: 35496 kB' 'AnonPages: 129872 kB' 'Shmem: 1103448 kB' 'KernelStack: 13112 kB' 'PageTables: 3308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 321016 kB' 'Slab: 736628 kB' 'SReclaimable: 321016 kB' 'SUnreclaim: 415612 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 
kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.138 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.139 10:12:36 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.139 10:12:36 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679840 kB' 'MemFree: 46317596 kB' 'MemUsed: 14362244 kB' 'SwapCached: 0 kB' 'Active: 6163792 kB' 'Inactive: 3412752 kB' 'Active(anon): 5830108 kB' 'Inactive(anon): 0 kB' 'Active(file): 333684 kB' 'Inactive(file): 3412752 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9057544 kB' 'Mapped: 146488 kB' 'AnonPages: 519148 kB' 'Shmem: 5311108 kB' 'KernelStack: 14696 kB' 'PageTables: 6340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 253068 kB' 'Slab: 708656 kB' 'SReclaimable: 253068 kB' 'SUnreclaim: 455588 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.139 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.140 10:12:36 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
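What the two per-node reads in this block establish is that the 1536 reserved 2048 kB pages were split across the NUMA nodes as requested: node0 reports HugePages_Total: 512 and node1 reports HugePages_Total: 1024, both with HugePages_Surp: 0, which is what the node0=512 / node1=1024 "expecting" checks just below confirm. A compact sketch of that verification, reusing the hypothetical get_meminfo_sketch helper sketched earlier (the expected split 512,1024 is taken from this run; the function name and glob handling are illustrative):

  # Sketch: confirm a requested per-node hugepage split such as "512,1024".
  verify_split_sketch() {
    local expected=$1 node total surp
    local -a got=()
    for node in /sys/devices/system/node/node[0-9]*; do
      node=${node##*node}                      # keep only the node number
      total=$(get_meminfo_sketch HugePages_Total "$node")
      surp=$(get_meminfo_sketch HugePages_Surp "$node")
      (( surp == 0 )) || { echo "node$node has surplus pages" >&2; return 1; }
      got+=("$total")
    done
    [[ $(IFS=,; echo "${got[*]}") == "$expected" ]]  # e.g. 512,1024 == 512,1024
  }

  verify_split_sketch 512,1024   # succeeds with the numbers in this run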
00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.140 10:12:36 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:00.140 node0=512 expecting 512 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:00.140 10:12:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:00.141 10:12:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:05:00.141 node1=1024 expecting 1024 00:05:00.141 10:12:36 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:05:00.141 00:05:00.141 real 0m3.982s 00:05:00.141 user 0m1.538s 00:05:00.141 sys 0m2.509s 00:05:00.141 10:12:36 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:00.141 10:12:36 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:00.141 ************************************ 00:05:00.141 END TEST custom_alloc 00:05:00.141 ************************************ 00:05:00.141 10:12:36 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:00.141 10:12:36 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:00.141 10:12:36 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:00.141 10:12:36 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:00.141 10:12:36 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:00.141 ************************************ 00:05:00.141 START TEST no_shrink_alloc 00:05:00.141 ************************************ 00:05:00.141 10:12:36 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:05:00.141 10:12:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:00.141 10:12:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:00.141 10:12:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:00.141 10:12:36 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:05:00.141 10:12:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:00.141 10:12:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:00.141 10:12:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:00.141 10:12:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:00.141 10:12:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:00.141 10:12:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:00.141 10:12:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:00.141 10:12:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:00.141 10:12:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:05:00.141 10:12:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:00.141 10:12:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:00.141 10:12:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:00.141 10:12:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:00.141 10:12:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:00.141 10:12:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:00.141 10:12:36 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:05:00.141 10:12:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:00.141 10:12:36 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:05:04.358 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:05:04.358 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:05:04.358 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:05:04.358 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:05:04.358 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:05:04.358 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:05:04.358 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:05:04.358 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:05:04.358 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:05:04.358 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:05:04.358 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:05:04.358 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:05:04.358 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:05:04.358 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:05:04.358 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:05:04.359 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:05:04.359 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@90 -- # local sorted_t 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108464952 kB' 'MemAvailable: 112190756 kB' 'Buffers: 4132 kB' 'Cached: 10603192 kB' 'SwapCached: 0 kB' 'Active: 7552272 kB' 'Inactive: 3701232 kB' 'Active(anon): 7060840 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 649588 kB' 'Mapped: 181932 kB' 'Shmem: 6414660 kB' 'KReclaimable: 574084 kB' 'Slab: 1445960 kB' 'SReclaimable: 574084 kB' 'SUnreclaim: 871876 kB' 'KernelStack: 27840 kB' 'PageTables: 8948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8659616 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238364 kB' 'VmallocChunk: 0 kB' 'Percpu: 141696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4058484 kB' 'DirectMap2M: 57487360 kB' 'DirectMap1G: 74448896 kB' 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.359 10:12:40 
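Note: the get_test_nr_hugepages 2097152 0 call traced above, together with the "Hugepagesize: 2048 kB", "Hugetlb: 2097152 kB" and "HugePages_Total: 1024" fields in the /proc/meminfo dump, line up as a plain size-to-page-count conversion. A bash sketch of that arithmetic (the Hugepagesize lookup and the division are reconstructions of how the numbers fit together, not code lifted from setup/hugepages.sh):

default_hugepages=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)  # 2048 (kB) on this machine
size=2097152                                    # requested pool size in kB, from the trace
(( size >= default_hugepages )) || exit 1       # mirrors the hugepages.sh@55 guard
nr_hugepages=$(( size / default_hugepages ))    # 2097152 / 2048 = 1024
echo "nr_hugepages=$nr_hugepages"               # matches HugePages_Total: 1024 above

With a single user node ('0'), the trace then records nodes_test[0]=1024, which is the per-node expectation verify_nr_hugepages checks afterwards.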
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.359 10:12:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.359 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # 
mapfile -t mem 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108466940 kB' 'MemAvailable: 112192744 kB' 'Buffers: 4132 kB' 'Cached: 10603196 kB' 'SwapCached: 0 kB' 'Active: 7552452 kB' 'Inactive: 3701232 kB' 'Active(anon): 7061020 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 649324 kB' 'Mapped: 182068 kB' 'Shmem: 6414664 kB' 'KReclaimable: 574084 kB' 'Slab: 1446032 kB' 'SReclaimable: 574084 kB' 'SUnreclaim: 871948 kB' 'KernelStack: 27808 kB' 'PageTables: 8900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8659636 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238300 kB' 'VmallocChunk: 0 kB' 'Percpu: 141696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4058484 kB' 'DirectMap2M: 57487360 kB' 'DirectMap1G: 74448896 kB' 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.360 10:12:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.360 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.361 
10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.361 10:12:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.361 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108467332 kB' 'MemAvailable: 112193136 kB' 'Buffers: 4132 kB' 'Cached: 10603212 kB' 'SwapCached: 0 kB' 'Active: 7551484 kB' 'Inactive: 3701232 kB' 'Active(anon): 7060052 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 648740 kB' 'Mapped: 181984 kB' 'Shmem: 6414680 kB' 'KReclaimable: 574084 kB' 'Slab: 1446008 kB' 'SReclaimable: 574084 kB' 'SUnreclaim: 871924 kB' 'KernelStack: 27776 kB' 'PageTables: 8780 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8659656 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238268 kB' 'VmallocChunk: 0 kB' 'Percpu: 141696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4058484 kB' 'DirectMap2M: 57487360 kB' 'DirectMap1G: 74448896 kB' 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.362 10:12:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:04.362 10:12:40 setup.sh.hugepages.no_shrink_alloc -- 
(xtrace loop: setup/common.sh@31-32 reads and skips the remaining /proc/meminfo fields: SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free; none matches HugePages_Rsvd)
00:05:04.364 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:04.364 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:04.364 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:04.364 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:04.364 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:04.364 nr_hugepages=1024
00:05:04.364 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:04.364 resv_hugepages=0
00:05:04.364 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:04.364 surplus_hugepages=0
00:05:04.364 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:04.364 anon_hugepages=0
00:05:04.364 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:04.364 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
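The condensed loop above is the generic meminfo lookup in setup/common.sh: it reads /proc/meminfo (or a per-node meminfo file when a node number is passed), strips the leading "Node N" prefix, and echoes the value of the first field whose name matches the requested key. A minimal standalone sketch of that pattern follows; the function name and the simplified prefix handling are ours, not the exact SPDK helper.

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # With a node argument, read that node's meminfo from sysfs instead,
    # mirroring the mem_f switch visible in the trace.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while read -r line; do
        line=${line#Node * }              # per-node lines carry a "Node N " prefix
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then     # e.g. HugePages_Rsvd matched above, value 0
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    return 1
}

Called as get_meminfo_sketch HugePages_Total or get_meminfo_sketch HugePages_Surp 0, it should yield the same 1024 and 0 values the trace reports on this host.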
00:05:04.364 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:04.364 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:04.364 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:04.364 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:04.364 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:04.364 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:04.364 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:04.364 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:04.364 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:04.364 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:04.364 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:04.364 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:04.364 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108467764 kB' 'MemAvailable: 112193568 kB' 'Buffers: 4132 kB' 'Cached: 10603236 kB' 'SwapCached: 0 kB' 'Active: 7551412 kB' 'Inactive: 3701232 kB' 'Active(anon): 7059980 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 648672 kB' 'Mapped: 181984 kB' 'Shmem: 6414704 kB' 'KReclaimable: 574084 kB' 'Slab: 1446008 kB' 'SReclaimable: 574084 kB' 'SUnreclaim: 871924 kB' 'KernelStack: 27792 kB' 'PageTables: 8824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8659680 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238268 kB' 'VmallocChunk: 0 kB' 'Percpu: 141696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4058484 kB' 'DirectMap2M: 57487360 kB' 'DirectMap1G: 74448896 kB'
(xtrace loop: setup/common.sh@31-32 reads and skips the fields above from MemTotal through Unaccepted; none matches HugePages_Total)
00:05:04.366 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:04.366 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:05:04.366 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:04.366 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:04.366 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:04.366 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:05:04.366 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:04.366 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:04.366 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:04.366 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:05:04.366 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:04.366 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:04.366 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:04.366 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
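get_nodes walks /sys/devices/system/node/node* and records 1024 hugepages on node0 and 0 on node1 (no_nodes=2). For reference, the same per-node counts can be read directly from sysfs; the loop below is illustrative only and assumes the default 2048 kB hugepage size reported in the meminfo dump above.

for n in /sys/devices/system/node/node[0-9]*; do
    # nr_hugepages, free_hugepages and surplus_hugepages are standard per-node attributes
    printf '%s: %s pages of 2048 kB\n' "${n##*/}" "$(cat "$n/hugepages/hugepages-2048kB/nr_hugepages)"
done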
00:05:04.366 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:04.366 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:04.366 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:05:04.366 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:04.366 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:04.366 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:04.366 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:04.366 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:04.366 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:04.366 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:04.366 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:04.366 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:04.366 10:12:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 60107652 kB' 'MemUsed: 5551356 kB' 'SwapCached: 0 kB' 'Active: 1387352 kB' 'Inactive: 288480 kB' 'Active(anon): 1229604 kB' 'Inactive(anon): 0 kB' 'Active(file): 157748 kB' 'Inactive(file): 288480 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1549768 kB' 'Mapped: 35516 kB' 'AnonPages: 129328 kB' 'Shmem: 1103540 kB' 'KernelStack: 13176 kB' 'PageTables: 3440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 321016 kB' 'Slab: 737236 kB' 'SReclaimable: 321016 kB' 'SUnreclaim: 416220 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
(xtrace loop: setup/common.sh@31-32 reads and skips the node0 fields above from MemTotal through HugePages_Free; none matches HugePages_Surp)
00:05:04.367 10:12:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:04.367 10:12:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:04.367 10:12:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:04.367 10:12:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:04.367 10:12:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:04.367 10:12:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:04.367 10:12:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:04.367 10:12:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:04.367 node0=1024 expecting 1024
00:05:04.367 10:12:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:04.367 10:12:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:05:04.367 10:12:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:05:04.367 10:12:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:05:04.367 10:12:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:04.367 10:12:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
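NRHUGE=512 and CLEAR_HUGE=no are set before scripts/setup.sh runs, and the INFO line in its output below shows the request acting as a lower bound: with 1024 pages already reserved on node0, nothing is shrunk (the test is, after all, no_shrink_alloc). A rough sketch of that "grow only" rule follows, using a hypothetical helper rather than the real setup.sh logic.

ensure_hugepages() {
    # Hypothetical illustration: keep at least $1 hugepages of 2048 kB on node $2.
    local requested=$1 node=${2:-0}
    local f=/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages
    local current
    current=$(<"$f")
    if (( current >= requested )); then
        echo "INFO: Requested $requested hugepages but $current already allocated on node$node"
        return 0
    fi
    echo "$requested" > "$f"    # raising the count requires root privileges
}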
00:05:07.665 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver
00:05:07.665 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver
00:05:07.665 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver
00:05:07.665 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:05:07.665 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:05:07.665 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:05:07.665 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:05:07.665 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
00:05:07.665 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver
00:05:07.665 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
00:05:07.665 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
00:05:07.665 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver
00:05:07.665 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver
00:05:07.665 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver
00:05:07.665 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver
00:05:07.665 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver
00:05:07.665 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver
00:05:07.665 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:05:07.665 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:05:07.665 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:05:07.665 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:07.665 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:07.665 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:07.665 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:07.665 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:07.665 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:07.665 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:07.665 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:07.665 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:07.665 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:07.665 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:07.665 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:07.665 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:07.665 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:07.665 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:07.665 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:07.665 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:07.665 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:07.665 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108464288 kB' 'MemAvailable: 112190092 kB' 'Buffers: 4132 kB' 'Cached: 10603352 kB' 'SwapCached: 0 kB' 'Active: 7553388 kB' 'Inactive: 3701232 kB' 'Active(anon): 7061956 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 649864 kB' 'Mapped: 182508 kB' 'Shmem: 6414820 kB' 'KReclaimable: 574084 kB' 'Slab: 1445540 kB' 'SReclaimable: 574084 kB' 'SUnreclaim: 871456 kB' 'KernelStack: 27856 kB' 'PageTables: 8916 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8664968 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238620 kB' 'VmallocChunk: 0 kB' 'Percpu: 141696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4058484 kB' 'DirectMap2M: 57487360 kB' 'DirectMap1G: 74448896 kB'
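That printf is the /proc/meminfo snapshot get_meminfo works from: setup/common.sh reads the file (or the node's meminfo under /sys/devices/system/node when a node is given), strips any "Node <n> " prefix, and walks the lines with IFS=': ' until the requested key matches. A minimal standalone sketch of that behaviour follows, under the assumption that matching a key and echoing its value is all the helper does; the function name get_meminfo_sketch and the example calls are invented for illustration and are not the SPDK helper itself:

#!/usr/bin/env bash
# Sketch of the parser the xtrace above steps through.
shopt -s extglob   # needed for the +([0-9]) pattern below

get_meminfo_sketch() {
    local get=$1 node=${2:-}   # field name, optional NUMA node
    local var val
    local mem_f mem
    mem_f=/proc/meminfo
    # Per-node statistics live in sysfs; with no node the path does not exist
    # and /proc/meminfo is used, exactly as the trace shows.
    [[ -e /sys/devices/system/node/node$node/meminfo ]] && mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    # Node files prefix every line with "Node <n> "; strip it so keys line up.
    mem=("${mem[@]#Node +([0-9]) }")
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo_sketch HugePages_Total      # system-wide count
get_meminfo_sketch HugePages_Free 0     # count on NUMA node 0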
[xtrace repeats: every key of the snapshot above, from MemTotal through HardwareCorrupted, is tested against AnonHugePages and skipped with 'continue']
00:05:07.666 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:07.666 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:07.666 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:07.666 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:05:07.666 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:07.666 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:07.666 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:07.666 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:07.666 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:07.666 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:07.666 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:07.666 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:07.666 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:07.666 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:07.666 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:07.666 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:07.666 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108465288 kB' 'MemAvailable: 112191092 kB' 'Buffers: 4132 kB' 'Cached: 10603352 kB' 'SwapCached: 0 kB' 'Active: 7558676 kB' 'Inactive: 3701232 kB' 'Active(anon): 7067244 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 654980 kB' 'Mapped: 182600 kB' 'Shmem: 6414820 kB' 'KReclaimable: 574084 kB' 'Slab: 1445652 kB' 'SReclaimable: 574084 kB' 'SUnreclaim: 871568 kB' 'KernelStack: 27888 kB' 'PageTables: 9080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8667884 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238540 kB' 'VmallocChunk: 0 kB' 'Percpu: 141696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4058484 kB' 'DirectMap2M: 57487360 kB' 'DirectMap1G: 74448896 kB'
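The AnonHugePages lookup that just returned 0 was gated by the check at setup/hugepages.sh@96, which compares the transparent-hugepage state ("always [madvise] never" on this host) against *[never]*: anonymous hugepages are only counted when THP is not globally disabled. A small sketch of that gate, reading the standard sysfs file; the variable names here are illustrative only:

#!/usr/bin/env bash
# Sketch of the THP gate: only inspect AnonHugePages when transparent
# hugepages are not globally disabled (sysfs state is not "[never]").
thp_state=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"

if [[ $thp_state != *"[never]"* ]]; then
    anon_kb=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo)
    echo "AnonHugePages: ${anon_kb} kB"   # 0 kB in the run above
else
    echo "THP disabled; skipping AnonHugePages accounting"
fi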
[xtrace repeats: the scan runs over the second snapshot, skipping every key from MemTotal through HugePages_Rsvd, until HugePages_Surp matches]
00:05:07.933 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:07.933 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:07.933 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:07.933 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
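With surp now known to be 0, verify_nr_hugepages still needs the reserved count before it can report on the pool. For reference, HugePages_Surp counts pages allocated beyond the static pool (overcommit), while HugePages_Rsvd counts pages promised to mappings but not yet faulted in. A sketch of pulling all four counters and deriving how many pages a new reservation could still claim; the "available" formula is an illustration, not part of the test script:

#!/usr/bin/env bash
# Sketch: read the hugepage pool counters in one awk pass and derive
# the pages still claimable by a new reservation (free minus reserved).
read -r total free rsvd surp < <(awk '
    $1 == "HugePages_Total:" { t = $2 }
    $1 == "HugePages_Free:"  { f = $2 }
    $1 == "HugePages_Rsvd:"  { r = $2 }
    $1 == "HugePages_Surp:"  { s = $2 }
    END { print t, f, r, s }' /proc/meminfo)

echo "total=$total free=$free rsvd=$rsvd surp=$surp available=$((free - rsvd))"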
00:05:07.933 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:07.933 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:07.933 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:07.933 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:07.933 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:07.933 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:07.933 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:07.933 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:07.933 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:07.933 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:07.933 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:07.933 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:07.933 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108466020 kB' 'MemAvailable: 112191816 kB' 'Buffers: 4132 kB' 'Cached: 10603372 kB' 'SwapCached: 0 kB' 'Active: 7558852 kB' 'Inactive: 3701232 kB' 'Active(anon): 7067420 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 655396 kB' 'Mapped: 182924 kB' 'Shmem: 6414840 kB' 'KReclaimable: 574076 kB' 'Slab: 1445688 kB' 'SReclaimable: 574076 kB' 'SUnreclaim: 871612 kB' 'KernelStack: 27888 kB' 'PageTables: 9252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8669640 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238576 kB' 'VmallocChunk: 0 kB' 'Percpu: 141696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4058484 kB' 'DirectMap2M: 57487360 kB' 'DirectMap1G: 74448896 kB'
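Note how each get_meminfo call above re-reads and re-scans the whole snapshot: three full passes so far for anon, surp and resv. The helper stays stateless that way; when several keys are wanted at once, a single pass into an associative array is an alternative, sketched here (the meminfo[] array and the echoed names are illustration only, not the test's code):

#!/usr/bin/env bash
# Sketch: collect every /proc/meminfo field in one pass instead of one
# full scan per requested key.
declare -A meminfo
while IFS=': ' read -r key val _; do
    meminfo[$key]=$val
done < /proc/meminfo

echo "anon=${meminfo[AnonHugePages]:-0}"
echo "surp=${meminfo[HugePages_Surp]:-0}"
echo "resv=${meminfo[HugePages_Rsvd]:-0}"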
[xtrace repeats: the scan runs over the third snapshot, skipping every key from MemTotal through HugePages_Free, until HugePages_Rsvd matches]
00:05:07.934 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:07.934 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:07.934 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:07.934 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:07.934 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:07.934 nr_hugepages=1024
00:05:07.934 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:07.934 resv_hugepages=0
00:05:07.934 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:07.934 surplus_hugepages=0
00:05:07.934 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:07.935
anon_hugepages=0 00:05:07.935 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:07.935 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:07.935 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:07.935 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:07.935 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:07.935 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:07.935 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:07.935 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:07.935 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:07.935 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:07.935 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:07.935 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:07.935 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.935 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.935 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108467396 kB' 'MemAvailable: 112193192 kB' 'Buffers: 4132 kB' 'Cached: 10603396 kB' 'SwapCached: 0 kB' 'Active: 7552816 kB' 'Inactive: 3701232 kB' 'Active(anon): 7061384 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 649808 kB' 'Mapped: 182344 kB' 'Shmem: 6414864 kB' 'KReclaimable: 574076 kB' 'Slab: 1445672 kB' 'SReclaimable: 574076 kB' 'SUnreclaim: 871596 kB' 'KernelStack: 27904 kB' 'PageTables: 9024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8663544 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 238636 kB' 'VmallocChunk: 0 kB' 'Percpu: 141696 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4058484 kB' 'DirectMap2M: 57487360 kB' 'DirectMap1G: 74448896 kB' 00:05:07.935 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.935 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.935 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.935 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.935 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:07.935 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:05:07.935 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.935 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[setup/common.sh@31-32: the read / [[ $var == HugePages_Total ]] / continue cycle repeats for each remaining /proc/meminfo field, in order: MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree and Unaccepted, none of which match]
00:05:07.936 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:07.936 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:05:07.936 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:07.936 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:07.936 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:07.936 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:05:07.936 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:07.937 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:07.937 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:07.937 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:05:07.937 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:07.937 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:07.937 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:07.937 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:07.937 10:12:44
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:07.937 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:07.937 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:07.937 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:07.937 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:07.937 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:07.937 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:07.937 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:07.937 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:07.937 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:07.937 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.937 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.937 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 60118012 kB' 'MemUsed: 5540996 kB' 'SwapCached: 0 kB' 'Active: 1387940 kB' 'Inactive: 288480 kB' 'Active(anon): 1230192 kB' 'Inactive(anon): 0 kB' 'Active(file): 157748 kB' 'Inactive(file): 288480 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1549888 kB' 'Mapped: 35528 kB' 'AnonPages: 129728 kB' 'Shmem: 1103660 kB' 'KernelStack: 13096 kB' 'PageTables: 3196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 321008 kB' 'Slab: 737064 kB' 'SReclaimable: 321008 kB' 'SUnreclaim: 416056 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:07.937 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.937 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.937 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.937 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.937 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.937 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.937 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.937 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.937 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.937 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:07.937 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:07.937 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:07.937 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:07.937 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[setup/common.sh@31-32: the read / [[ $var == HugePages_Surp ]] / continue cycle repeats for each remaining field of /sys/devices/system/node/node0/meminfo, in order: Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total and HugePages_Free, none of which match]
00:05:07.938 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:07.938 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:07.938 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:07.938 10:12:44 setup.sh.hugepages.no_shrink_alloc --
setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:07.938 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:07.938 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:07.938 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:07.938 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:07.938 node0=1024 expecting 1024 00:05:07.938 10:12:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:07.938 00:05:07.938 real 0m7.958s 00:05:07.938 user 0m3.124s 00:05:07.938 sys 0m4.962s 00:05:07.938 10:12:44 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:07.938 10:12:44 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:07.938 ************************************ 00:05:07.938 END TEST no_shrink_alloc 00:05:07.938 ************************************ 00:05:07.938 10:12:44 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:07.938 10:12:44 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:05:07.938 10:12:44 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:07.938 10:12:44 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:07.938 10:12:44 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:07.938 10:12:44 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:07.938 10:12:44 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:07.938 10:12:44 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:07.938 10:12:44 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:07.938 10:12:44 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:07.938 10:12:44 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:07.938 10:12:44 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:07.938 10:12:44 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:07.938 10:12:44 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:07.938 10:12:44 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:07.938 00:05:07.938 real 0m28.653s 00:05:07.938 user 0m11.114s 00:05:07.938 sys 0m17.912s 00:05:07.938 10:12:44 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:07.938 10:12:44 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:07.938 ************************************ 00:05:07.938 END TEST hugepages 00:05:07.938 ************************************ 00:05:07.938 10:12:45 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:07.938 10:12:45 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:05:07.938 10:12:45 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:07.938 10:12:45 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.938 10:12:45 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:07.938 
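
Before the driver test output starts, it is worth noting what the long no_shrink_alloc trace above is doing: setup/common.sh reads /proc/meminfo (or /sys/devices/system/node/nodeN/meminfo for a single NUMA node) field by field with IFS=': ' and echoes the value of the requested counter, and setup/hugepages.sh then spreads the expected page counts across the nodes it finds, which is what produces the "node0=1024 expecting 1024" line. Below is a minimal, self-contained sketch of that lookup; it assumes only what the xtrace shows, and the helper name get_meminfo_sketch is illustrative, not the real setup/common.sh function.

    #!/usr/bin/env bash
    # Sketch of the meminfo lookup traced above; illustrative, not the real script.
    shopt -s extglob

    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        local -a mem
        local var val _

        # Per-node counters live in sysfs; each line there carries a "Node N " prefix.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N " prefix, if present

        # Fields look like "HugePages_Surp:      0"; print the value of the match.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "${val:-0}"
                return 0
            fi
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo_sketch HugePages_Total     # 1024 in the run above
    get_meminfo_sketch HugePages_Surp 0    # 0 on node0, hence "node0=1024 expecting 1024"

Reading the counters with a shell loop rather than a single grep is also why the trace is so long: with xtrace enabled, every read, comparison and continue is its own traced command.
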
************************************ 00:05:07.938 START TEST driver 00:05:07.938 ************************************ 00:05:07.938 10:12:45 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:05:08.200 * Looking for test storage... 00:05:08.200 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:05:08.200 10:12:45 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:05:08.200 10:12:45 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:08.200 10:12:45 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:05:13.494 10:12:50 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:13.494 10:12:50 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:13.494 10:12:50 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:13.494 10:12:50 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:13.494 ************************************ 00:05:13.494 START TEST guess_driver 00:05:13.494 ************************************ 00:05:13.494 10:12:50 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:05:13.494 10:12:50 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:13.494 10:12:50 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:13.494 10:12:50 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:13.494 10:12:50 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:13.494 10:12:50 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:05:13.494 10:12:50 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:13.494 10:12:50 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:13.494 10:12:50 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:05:13.494 10:12:50 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:13.494 10:12:50 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 370 > 0 )) 00:05:13.494 10:12:50 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:05:13.494 10:12:50 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:05:13.494 10:12:50 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:05:13.494 10:12:50 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:05:13.494 10:12:50 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:05:13.494 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:13.494 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:13.494 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:13.494 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:13.494 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:05:13.494 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:05:13.494 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:05:13.494 10:12:50 
setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:05:13.494 10:12:50 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:05:13.494 10:12:50 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:05:13.494 10:12:50 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:13.494 10:12:50 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:05:13.494 Looking for driver=vfio-pci 00:05:13.494 10:12:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:13.494 10:12:50 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:05:13.494 10:12:50 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:13.494 10:12:50 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:05:17.695 10:12:53 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:17.695 10:12:53 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:17.695 10:12:53 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:17.695 10:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:17.695 10:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:17.695 10:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:17.695 10:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:17.695 10:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:17.695 10:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:17.695 10:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:17.695 10:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:17.695 10:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:17.695 10:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:17.695 10:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:17.695 10:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:17.695 10:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:17.695 10:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:17.695 10:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:17.695 10:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:17.695 10:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:17.695 10:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:17.695 10:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:17.695 10:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:17.695 10:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:17.695 10:12:54 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:17.695 10:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:17.695 10:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:17.695 10:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:17.695 10:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:17.695 10:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:17.695 10:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:17.695 10:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:17.695 10:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:17.695 10:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:17.695 10:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:17.695 10:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:17.695 10:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:17.695 10:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:17.695 10:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:17.695 10:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:17.695 10:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:17.695 10:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:17.695 10:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:17.695 10:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:17.695 10:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:17.695 10:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:17.695 10:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:17.695 10:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:17.695 10:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:17.695 10:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:17.695 10:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:17.695 10:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:17.695 10:12:54 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:17.695 10:12:54 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:17.695 10:12:54 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:05:22.975 00:05:22.975 real 0m9.100s 00:05:22.975 user 0m3.018s 00:05:22.975 sys 0m5.308s 00:05:22.975 10:12:59 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:22.975 10:12:59 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:22.975 
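The long run of [[ -> == \-\> ]] / [[ vfio-pci == vfio-pci ]] pairs above is a per-device verification pass: setup.sh config is re-run and every rebind line must end with the driver that pick_driver chose. A compressed sketch of that loop, assuming $driver and $rootdir are already set (as they are in the real script):

# Sketch of the verification loop; $driver and $rootdir are assumed to be set.
fail=0
while read -r _ _ _ _ marker setup_driver; do
    [[ $marker == '->' ]] || continue              # only rebind lines matter
    [[ $setup_driver == "$driver" ]] || fail=$((fail + 1))
done < <("$rootdir/scripts/setup.sh" config)
(( fail == 0 )) && echo "every device was bound to $driver"
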
************************************ 00:05:22.975 END TEST guess_driver 00:05:22.975 ************************************ 00:05:22.975 10:12:59 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:05:22.975 00:05:22.975 real 0m14.265s 00:05:22.975 user 0m4.543s 00:05:22.975 sys 0m8.182s 00:05:22.975 10:12:59 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:22.975 10:12:59 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:22.975 ************************************ 00:05:22.975 END TEST driver 00:05:22.975 ************************************ 00:05:22.975 10:12:59 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:22.975 10:12:59 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:05:22.975 10:12:59 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:22.975 10:12:59 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.975 10:12:59 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:22.975 ************************************ 00:05:22.975 START TEST devices 00:05:22.975 ************************************ 00:05:22.975 10:12:59 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:05:22.975 * Looking for test storage... 00:05:22.975 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:05:22.975 10:12:59 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:22.975 10:12:59 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:22.976 10:12:59 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:22.976 10:12:59 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:05:27.174 10:13:03 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:27.174 10:13:03 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:27.174 10:13:03 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:27.174 10:13:03 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:27.174 10:13:03 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:27.174 10:13:03 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:27.174 10:13:03 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:27.174 10:13:03 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:27.174 10:13:03 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:27.174 10:13:03 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:27.174 10:13:03 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:27.174 10:13:03 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:27.174 10:13:03 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:27.174 10:13:03 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:27.174 10:13:03 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:27.174 10:13:03 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:27.174 10:13:03 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:27.174 10:13:03 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:65:00.0 
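The devices test above builds its candidate list by skipping zoned namespaces, rejecting disks that already carry a partition table or are smaller than 3 GiB, and recording which PCI function backs each block device. A rough bash equivalent, assuming PCIe-attached NVMe and the usual sysfs layout (this condenses several helpers, including the spdk-gpt.py probe, into plain blkid/sysfs checks):

declare -A blocks_to_pci
min_disk_size=$((3 * 1024 * 1024 * 1024))
for blk in /sys/block/nvme*n*; do
    dev=${blk##*/}
    [[ $(cat "$blk/queue/zoned" 2>/dev/null) == none ]] || continue   # skip zoned namespaces
    [[ -z $(blkid -s PTTYPE -o value "/dev/$dev") ]] || continue      # already partitioned / in use
    (( $(cat "$blk/size") * 512 >= min_disk_size )) || continue       # too small for the tests
    blocks_to_pci[$dev]=$(basename "$(readlink -f "$blk/device/device")")
done
declare -p blocks_to_pci       # e.g. nvme0n1 -> 0000:65:00.0
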
00:05:27.174 10:13:03 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:05:27.174 10:13:03 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:27.174 10:13:03 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:27.174 10:13:03 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:05:27.174 No valid GPT data, bailing 00:05:27.174 10:13:03 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:27.174 10:13:03 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:27.174 10:13:03 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:27.174 10:13:03 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:27.174 10:13:03 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:27.174 10:13:03 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:27.174 10:13:03 setup.sh.devices -- setup/common.sh@80 -- # echo 1920383410176 00:05:27.174 10:13:03 setup.sh.devices -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size )) 00:05:27.174 10:13:03 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:27.174 10:13:03 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:65:00.0 00:05:27.174 10:13:03 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:27.174 10:13:03 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:27.174 10:13:03 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:27.174 10:13:03 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:27.174 10:13:03 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.174 10:13:03 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:27.174 ************************************ 00:05:27.174 START TEST nvme_mount 00:05:27.174 ************************************ 00:05:27.174 10:13:03 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:05:27.174 10:13:03 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:27.174 10:13:03 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:27.174 10:13:03 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:27.174 10:13:03 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:27.174 10:13:03 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:27.174 10:13:03 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:27.174 10:13:03 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:27.174 10:13:03 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:27.174 10:13:03 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:27.174 10:13:03 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:27.174 10:13:03 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:27.174 10:13:03 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:27.174 10:13:03 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # 
(( part <= part_no )) 00:05:27.174 10:13:03 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:27.174 10:13:03 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:27.174 10:13:03 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:27.174 10:13:03 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:27.174 10:13:03 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:27.174 10:13:03 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:27.744 Creating new GPT entries in memory. 00:05:27.744 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:27.744 other utilities. 00:05:27.744 10:13:04 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:27.744 10:13:04 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:27.744 10:13:04 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:27.744 10:13:04 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:27.744 10:13:04 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:28.684 Creating new GPT entries in memory. 00:05:28.684 The operation has completed successfully. 00:05:28.684 10:13:05 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:28.684 10:13:05 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:28.684 10:13:05 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 2703082 00:05:28.684 10:13:05 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:28.684 10:13:05 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size= 00:05:28.684 10:13:05 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:28.684 10:13:05 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:28.684 10:13:05 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:28.684 10:13:05 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:28.684 10:13:05 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:65:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:28.684 10:13:05 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:05:28.684 10:13:05 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:28.684 10:13:05 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:28.684 10:13:05 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:28.684 10:13:05 
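Stripped of the xtrace noise, the nvme_mount setup traced above is an ordinary partition/format/mount sequence followed by a dummy test file. A condensed sketch with the sizes and paths taken from the log ($SPDK_DIR standing in for the workspace checkout is an assumption; run only against a disposable disk):

disk=/dev/nvme0n1
mnt=$SPDK_DIR/test/setup/nvme_mount        # $SPDK_DIR is assumed to point at the repo
sgdisk "$disk" --zap-all                   # wipe any existing GPT/MBR
sgdisk "$disk" --new=1:2048:2099199        # ~1 GiB partition, as in the trace
mkdir -p "$mnt"
mkfs.ext4 -qF "${disk}p1"
mount "${disk}p1" "$mnt"
touch "$mnt/test_nvme"                     # the dummy file later checked by the verify step
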
setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:28.684 10:13:05 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:28.684 10:13:05 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:28.684 10:13:05 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:28.684 10:13:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.684 10:13:05 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:05:28.684 10:13:05 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:28.684 10:13:05 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:28.684 10:13:05 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:05:32.890 10:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:32.890 10:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.890 10:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:32.890 10:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.890 10:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:32.890 10:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.890 10:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:32.890 10:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.890 10:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:32.890 10:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.890 10:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:32.890 10:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.890 10:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:32.890 10:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.890 10:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:32.890 10:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.890 10:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:32.890 10:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:32.890 10:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:32.890 10:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.890 10:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:32.890 10:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.890 10:13:09 
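The verify step above re-runs setup.sh config with PCI_ALLOWED restricted to the disk under test and scans the status column for an "Active devices: ..., so not binding PCI dev" line naming the expected mount. A compressed sketch of that parse (variable names are illustrative):

target=0000:65:00.0
expected=mount@nvme0n1:nvme0n1p1
found=0
while read -r pci _ _ status; do
    [[ $pci == "$target" ]] || continue
    [[ $status == *"Active devices: "*"$expected"* ]] && found=1
done < <(PCI_ALLOWED=$target "$rootdir/scripts/setup.sh" config)
(( found == 1 )) || echo "mounted namespace was not protected from rebinding" >&2
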
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:32.890 10:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.890 10:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:32.890 10:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.890 10:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:32.890 10:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.890 10:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:32.890 10:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.890 10:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:32.890 10:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.890 10:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:32.890 10:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.890 10:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:32.890 10:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.890 10:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:32.890 10:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:32.890 10:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:32.890 10:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:32.890 10:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:32.890 10:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:32.891 10:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:32.891 10:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:32.891 10:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:32.891 10:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:32.891 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:32.891 10:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:32.891 10:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:32.891 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:32.891 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:05:32.891 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:32.891 /dev/nvme0n1: calling ioctl to re-read partition table: Success 
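The cleanup traced above, reduced to the three commands involved: unmount the test directory, blank the ext4 signature on the partition, then blank the GPT/PMBR on the whole disk (the wipefs byte offsets in the log are exactly those signatures).

umount "$SPDK_DIR/test/setup/nvme_mount"
wipefs --all /dev/nvme0n1p1    # removes the ext4 magic ("53 ef" at offset 0x438)
wipefs --all /dev/nvme0n1      # removes primary/backup GPT and the protective MBR
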
00:05:32.891 10:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:05:32.891 10:13:09 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:05:32.891 10:13:09 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:32.891 10:13:09 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:32.891 10:13:09 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:32.891 10:13:09 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:32.891 10:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:65:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:32.891 10:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:05:32.891 10:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:32.891 10:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:32.891 10:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:32.891 10:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:32.891 10:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:32.891 10:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:32.891 10:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:32.891 10:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.891 10:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:05:32.891 10:13:09 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:32.891 10:13:09 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:32.891 10:13:09 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:05:36.270 10:13:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:36.270 10:13:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.270 10:13:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:36.270 10:13:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.270 10:13:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:36.270 10:13:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.270 10:13:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:36.270 10:13:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:05:36.270 10:13:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:36.270 10:13:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.270 10:13:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:36.270 10:13:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.270 10:13:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:36.270 10:13:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.270 10:13:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:36.270 10:13:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.549 10:13:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:36.549 10:13:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:36.549 10:13:13 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:36.549 10:13:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.549 10:13:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:36.549 10:13:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.549 10:13:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:36.549 10:13:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.549 10:13:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:36.549 10:13:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.549 10:13:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:36.549 10:13:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.549 10:13:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:36.549 10:13:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.549 10:13:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:36.549 10:13:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.549 10:13:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:36.549 10:13:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.549 10:13:13 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:36.549 10:13:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.549 10:13:13 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:36.549 10:13:13 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:36.549 10:13:13 
setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:36.549 10:13:13 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:36.549 10:13:13 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:36.549 10:13:13 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:36.549 10:13:13 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:65:00.0 data@nvme0n1 '' '' 00:05:36.549 10:13:13 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:05:36.549 10:13:13 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:36.549 10:13:13 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:36.549 10:13:13 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:36.549 10:13:13 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:36.549 10:13:13 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:36.549 10:13:13 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:36.549 10:13:13 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:36.549 10:13:13 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:05:36.549 10:13:13 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:36.549 10:13:13 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:36.550 10:13:13 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:05:40.769 10:13:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:40.769 10:13:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.769 10:13:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:40.769 10:13:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.769 10:13:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:40.769 10:13:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.769 10:13:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:40.769 10:13:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.769 10:13:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:40.769 10:13:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.769 10:13:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:40.769 10:13:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.769 10:13:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:40.769 10:13:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.769 
10:13:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:40.769 10:13:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.769 10:13:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:40.769 10:13:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:40.769 10:13:17 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:40.769 10:13:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.769 10:13:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:40.769 10:13:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.769 10:13:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:40.769 10:13:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.769 10:13:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:40.769 10:13:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.769 10:13:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:40.769 10:13:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.769 10:13:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:40.769 10:13:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.769 10:13:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:40.769 10:13:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.769 10:13:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:40.769 10:13:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.769 10:13:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:40.769 10:13:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:40.769 10:13:17 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:40.769 10:13:17 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:40.769 10:13:17 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:40.770 10:13:17 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:40.770 10:13:17 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:40.770 10:13:17 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:40.770 10:13:17 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:40.770 10:13:17 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:40.770 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:40.770 00:05:40.770 real 0m13.791s 00:05:40.770 user 0m4.369s 00:05:40.770 sys 0m7.335s 00:05:40.770 
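The second nvme_mount pass, whose teardown and timing appear above, skips partitioning entirely: it formats the raw namespace with an explicit 1024M filesystem size and mounts the whole disk, which is why that verify pass looked for nvme0n1:nvme0n1 rather than a p1 partition. In command form (a sketch, same caveats as before):

mkfs.ext4 -qF /dev/nvme0n1 1024M                       # cap the filesystem at 1 GiB
mount /dev/nvme0n1 "$SPDK_DIR/test/setup/nvme_mount"
touch "$SPDK_DIR/test/setup/nvme_mount/test_nvme"
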
10:13:17 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.770 10:13:17 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:40.770 ************************************ 00:05:40.770 END TEST nvme_mount 00:05:40.770 ************************************ 00:05:40.770 10:13:17 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:05:40.770 10:13:17 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:40.770 10:13:17 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:40.770 10:13:17 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.770 10:13:17 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:40.770 ************************************ 00:05:40.770 START TEST dm_mount 00:05:40.770 ************************************ 00:05:40.770 10:13:17 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:05:40.770 10:13:17 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:40.770 10:13:17 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:40.770 10:13:17 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:40.770 10:13:17 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:40.770 10:13:17 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:40.770 10:13:17 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:40.770 10:13:17 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:40.770 10:13:17 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:40.770 10:13:17 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:40.770 10:13:17 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:40.770 10:13:17 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:40.770 10:13:17 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:40.770 10:13:17 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:40.770 10:13:17 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:40.770 10:13:17 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:40.770 10:13:17 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:40.770 10:13:17 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:40.770 10:13:17 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:40.770 10:13:17 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:40.770 10:13:17 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:40.770 10:13:17 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:41.709 Creating new GPT entries in memory. 00:05:41.709 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:41.709 other utilities. 00:05:41.709 10:13:18 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:41.709 10:13:18 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:41.709 10:13:18 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:41.709 10:13:18 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:41.709 10:13:18 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:42.650 Creating new GPT entries in memory. 00:05:42.650 The operation has completed successfully. 00:05:42.650 10:13:19 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:42.650 10:13:19 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:42.650 10:13:19 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:42.650 10:13:19 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:42.650 10:13:19 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:43.589 The operation has completed successfully. 00:05:43.589 10:13:20 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:43.589 10:13:20 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:43.589 10:13:20 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 2708691 00:05:43.589 10:13:20 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:43.589 10:13:20 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:43.589 10:13:20 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:43.589 10:13:20 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:43.589 10:13:20 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:43.589 10:13:20 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:43.589 10:13:20 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:43.589 10:13:20 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:43.589 10:13:20 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:43.589 10:13:20 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-1 00:05:43.589 10:13:20 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-1 00:05:43.589 10:13:20 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-1 ]] 00:05:43.589 10:13:20 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-1 ]] 00:05:43.589 10:13:20 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:43.589 10:13:20 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount size= 00:05:43.589 10:13:20 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:43.589 10:13:20 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:43.589 10:13:20 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:43.589 10:13:20 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount 
/dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:43.589 10:13:20 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:65:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:43.589 10:13:20 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:05:43.589 10:13:20 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:43.590 10:13:20 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:43.590 10:13:20 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:43.590 10:13:20 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:43.590 10:13:20 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:43.590 10:13:20 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:43.590 10:13:20 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:43.590 10:13:20 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:43.590 10:13:20 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:05:43.590 10:13:20 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:43.590 10:13:20 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:43.590 10:13:20 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:05:47.794 10:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:47.794 10:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.794 10:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:47.794 10:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.794 10:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:47.794 10:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.794 10:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:47.794 10:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.794 10:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:47.794 10:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.794 10:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:47.794 10:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.794 10:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:47.794 10:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.794 10:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:47.794 10:13:24 
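The dm_mount steps traced above take the two freshly created 1 GiB partitions and expose them as a single device-mapper node before formatting and mounting it. One way to reproduce that by hand is a two-segment linear table; the table layout below is an assumption (the test drives dmsetup through its own helpers), everything else mirrors the log:

p1=/dev/nvme0n1p1; p2=/dev/nvme0n1p2
s1=$(blockdev --getsz "$p1"); s2=$(blockdev --getsz "$p2")   # sizes in 512 B sectors
dmsetup create nvme_dm_test <<EOF
0 $s1 linear $p1 0
$s1 $s2 linear $p2 0
EOF
dm=$(basename "$(readlink -f /dev/mapper/nvme_dm_test)")     # e.g. dm-1
test -e "/sys/class/block/nvme0n1p1/holders/$dm"             # both partitions now hold the dm node
test -e "/sys/class/block/nvme0n1p2/holders/$dm"
mkfs.ext4 -qF /dev/mapper/nvme_dm_test
mkdir -p "$SPDK_DIR/test/setup/dm_mount"
mount /dev/mapper/nvme_dm_test "$SPDK_DIR/test/setup/dm_mount"
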
setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.794 10:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:47.794 10:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:47.794 10:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:47.794 10:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.794 10:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:47.794 10:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.794 10:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:47.794 10:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.794 10:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:47.794 10:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.794 10:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:47.794 10:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.794 10:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:47.794 10:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.794 10:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:47.794 10:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.794 10:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:47.794 10:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.794 10:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:47.794 10:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.794 10:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:47.794 10:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:47.794 10:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:47.794 10:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:47.794 10:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:47.794 10:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:47.794 10:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:65:00.0 holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1 '' '' 00:05:47.794 10:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:05:47.794 
10:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1 00:05:47.794 10:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:47.794 10:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:47.794 10:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:47.794 10:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:47.794 10:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:47.794 10:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.794 10:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:05:47.794 10:13:24 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:47.794 10:13:24 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:47.794 10:13:24 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:05:51.092 10:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:51.092 10:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.092 10:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:51.092 10:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.092 10:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:51.092 10:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.092 10:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:51.092 10:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.092 10:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:51.092 10:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.092 10:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:51.092 10:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.092 10:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:51.092 10:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.092 10:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:51.092 10:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.092 10:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:51.092 10:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\1\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\1* ]] 00:05:51.092 10:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:51.092 10:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.092 10:13:28 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:51.092 10:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.092 10:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:51.092 10:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.092 10:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:51.092 10:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.092 10:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:51.092 10:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.092 10:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:51.092 10:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.092 10:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:51.092 10:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.092 10:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:51.092 10:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.092 10:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:51.092 10:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.092 10:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:51.092 10:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:51.092 10:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:51.092 10:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:51.092 10:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:51.352 10:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:51.352 10:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:51.352 10:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:51.352 10:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:51.352 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:51.352 10:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:51.352 10:13:28 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:51.352 00:05:51.352 real 0m10.763s 00:05:51.352 user 0m2.857s 00:05:51.352 sys 0m4.990s 00:05:51.352 10:13:28 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.352 10:13:28 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:51.352 ************************************ 00:05:51.352 END TEST dm_mount 00:05:51.352 ************************************ 00:05:51.352 10:13:28 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:05:51.352 10:13:28 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:51.352 10:13:28 
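dm cleanup, as traced above, is the mirror image: drop the mapping first, then clear the filesystem signatures that were written onto the backing partitions.

umount "$SPDK_DIR/test/setup/dm_mount" 2>/dev/null || true
dmsetup remove --force nvme_dm_test
wipefs --all /dev/nvme0n1p1
wipefs --all /dev/nvme0n1p2
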
setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:51.352 10:13:28 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:51.352 10:13:28 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:51.352 10:13:28 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:51.352 10:13:28 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:51.352 10:13:28 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:51.612 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:51.612 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:05:51.612 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:51.612 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:51.612 10:13:28 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:51.612 10:13:28 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:51.612 10:13:28 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:51.612 10:13:28 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:51.612 10:13:28 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:51.612 10:13:28 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:51.612 10:13:28 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:51.612 00:05:51.612 real 0m29.245s 00:05:51.612 user 0m8.823s 00:05:51.612 sys 0m15.287s 00:05:51.612 10:13:28 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.612 10:13:28 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:51.612 ************************************ 00:05:51.612 END TEST devices 00:05:51.612 ************************************ 00:05:51.612 10:13:28 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:51.612 00:05:51.612 real 1m39.969s 00:05:51.612 user 0m33.642s 00:05:51.612 sys 0m57.920s 00:05:51.612 10:13:28 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.612 10:13:28 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:51.612 ************************************ 00:05:51.612 END TEST setup.sh 00:05:51.612 ************************************ 00:05:51.612 10:13:28 -- common/autotest_common.sh@1142 -- # return 0 00:05:51.612 10:13:28 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:05:55.814 Hugepages 00:05:55.814 node hugesize free / total 00:05:55.814 node0 1048576kB 0 / 0 00:05:55.814 node0 2048kB 2048 / 2048 00:05:55.814 node1 1048576kB 0 / 0 00:05:55.814 node1 2048kB 0 / 0 00:05:55.814 00:05:55.814 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:55.814 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:05:55.814 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:05:55.814 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:05:55.814 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:05:55.814 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:05:55.814 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:05:55.814 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:05:55.814 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:05:55.814 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:05:55.814 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:05:55.814 I/OAT 
0000:80:01.1 8086 0b00 1 ioatdma - - 00:05:55.814 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:05:55.814 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:05:55.814 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:05:55.814 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:05:55.814 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:05:55.814 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:05:55.814 10:13:32 -- spdk/autotest.sh@130 -- # uname -s 00:05:55.814 10:13:32 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:55.814 10:13:32 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:55.814 10:13:32 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:06:00.019 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:06:00.019 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:06:00.019 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:06:00.019 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:06:00.019 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:06:00.019 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:06:00.019 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:06:00.019 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:06:00.019 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:06:00.019 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:06:00.019 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:06:00.019 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:06:00.019 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:06:00.019 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:06:00.019 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:06:00.019 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:06:01.401 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:06:01.401 10:13:38 -- common/autotest_common.sh@1532 -- # sleep 1 00:06:02.340 10:13:39 -- common/autotest_common.sh@1533 -- # bdfs=() 00:06:02.340 10:13:39 -- common/autotest_common.sh@1533 -- # local bdfs 00:06:02.340 10:13:39 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:06:02.340 10:13:39 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:06:02.340 10:13:39 -- common/autotest_common.sh@1513 -- # bdfs=() 00:06:02.340 10:13:39 -- common/autotest_common.sh@1513 -- # local bdfs 00:06:02.340 10:13:39 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:02.340 10:13:39 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:06:02.340 10:13:39 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:02.340 10:13:39 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:06:02.340 10:13:39 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:06:02.341 10:13:39 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:06:06.550 Waiting for block devices as requested 00:06:06.550 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:06:06.550 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:06:06.550 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:06:06.550 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:06:06.550 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:06:06.550 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:06:06.550 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:06:06.550 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:06:06.812 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:06:06.812 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:06:06.812 0000:00:01.7 (8086 0b00): vfio-pci -> 
ioatdma 00:06:07.072 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:06:07.072 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:06:07.072 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:06:07.072 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:06:07.333 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:06:07.333 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:06:07.333 10:13:44 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:06:07.333 10:13:44 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:06:07.333 10:13:44 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:06:07.333 10:13:44 -- common/autotest_common.sh@1502 -- # grep 0000:65:00.0/nvme/nvme 00:06:07.333 10:13:44 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:06:07.333 10:13:44 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:06:07.333 10:13:44 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:06:07.333 10:13:44 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:06:07.333 10:13:44 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:06:07.333 10:13:44 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:06:07.333 10:13:44 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:06:07.333 10:13:44 -- common/autotest_common.sh@1545 -- # grep oacs 00:06:07.333 10:13:44 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:06:07.333 10:13:44 -- common/autotest_common.sh@1545 -- # oacs=' 0x5f' 00:06:07.333 10:13:44 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:06:07.333 10:13:44 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:06:07.333 10:13:44 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:06:07.333 10:13:44 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:06:07.333 10:13:44 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:06:07.333 10:13:44 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:06:07.333 10:13:44 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:06:07.333 10:13:44 -- common/autotest_common.sh@1557 -- # continue 00:06:07.333 10:13:44 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:06:07.333 10:13:44 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:07.333 10:13:44 -- common/autotest_common.sh@10 -- # set +x 00:06:07.333 10:13:44 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:06:07.333 10:13:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:07.333 10:13:44 -- common/autotest_common.sh@10 -- # set +x 00:06:07.333 10:13:44 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:06:11.534 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:06:11.534 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:06:11.534 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:06:11.534 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:06:11.534 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:06:11.534 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:06:11.534 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:06:11.534 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:06:11.534 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:06:11.534 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:06:11.534 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:06:11.534 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:06:11.534 
0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:06:11.534 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:06:11.534 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:06:11.534 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:06:11.534 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:06:11.534 10:13:48 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:06:11.534 10:13:48 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:11.534 10:13:48 -- common/autotest_common.sh@10 -- # set +x 00:06:11.534 10:13:48 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:06:11.534 10:13:48 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:06:11.534 10:13:48 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:06:11.534 10:13:48 -- common/autotest_common.sh@1577 -- # bdfs=() 00:06:11.535 10:13:48 -- common/autotest_common.sh@1577 -- # local bdfs 00:06:11.535 10:13:48 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:06:11.535 10:13:48 -- common/autotest_common.sh@1513 -- # bdfs=() 00:06:11.535 10:13:48 -- common/autotest_common.sh@1513 -- # local bdfs 00:06:11.535 10:13:48 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:11.535 10:13:48 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:11.535 10:13:48 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:06:11.535 10:13:48 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:06:11.535 10:13:48 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:06:11.535 10:13:48 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:06:11.535 10:13:48 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:06:11.535 10:13:48 -- common/autotest_common.sh@1580 -- # device=0xa80a 00:06:11.535 10:13:48 -- common/autotest_common.sh@1581 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:06:11.535 10:13:48 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:06:11.535 10:13:48 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:06:11.535 10:13:48 -- common/autotest_common.sh@1593 -- # return 0 00:06:11.535 10:13:48 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:06:11.535 10:13:48 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:06:11.535 10:13:48 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:11.535 10:13:48 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:11.535 10:13:48 -- spdk/autotest.sh@162 -- # timing_enter lib 00:06:11.535 10:13:48 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:11.535 10:13:48 -- common/autotest_common.sh@10 -- # set +x 00:06:11.535 10:13:48 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:06:11.535 10:13:48 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:06:11.535 10:13:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:11.535 10:13:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.535 10:13:48 -- common/autotest_common.sh@10 -- # set +x 00:06:11.796 ************************************ 00:06:11.796 START TEST env 00:06:11.796 ************************************ 00:06:11.796 10:13:48 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:06:11.796 * Looking for test storage... 
00:06:11.796 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env 00:06:11.796 10:13:48 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:06:11.796 10:13:48 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:11.796 10:13:48 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.796 10:13:48 env -- common/autotest_common.sh@10 -- # set +x 00:06:11.796 ************************************ 00:06:11.796 START TEST env_memory 00:06:11.796 ************************************ 00:06:11.796 10:13:48 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:06:11.796 00:06:11.796 00:06:11.796 CUnit - A unit testing framework for C - Version 2.1-3 00:06:11.796 http://cunit.sourceforge.net/ 00:06:11.796 00:06:11.796 00:06:11.796 Suite: memory 00:06:11.796 Test: alloc and free memory map ...[2024-07-15 10:13:48.950206] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:11.796 passed 00:06:11.796 Test: mem map translation ...[2024-07-15 10:13:48.975886] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:11.796 [2024-07-15 10:13:48.975917] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:11.796 [2024-07-15 10:13:48.975965] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:11.796 [2024-07-15 10:13:48.975972] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:12.057 passed 00:06:12.058 Test: mem map registration ...[2024-07-15 10:13:49.031393] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:06:12.058 [2024-07-15 10:13:49.031430] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:06:12.058 passed 00:06:12.058 Test: mem map adjacent registrations ...passed 00:06:12.058 00:06:12.058 Run Summary: Type Total Ran Passed Failed Inactive 00:06:12.058 suites 1 1 n/a 0 0 00:06:12.058 tests 4 4 4 0 0 00:06:12.058 asserts 152 152 152 0 n/a 00:06:12.058 00:06:12.058 Elapsed time = 0.195 seconds 00:06:12.058 00:06:12.058 real 0m0.210s 00:06:12.058 user 0m0.197s 00:06:12.058 sys 0m0.011s 00:06:12.058 10:13:49 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.058 10:13:49 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:12.058 ************************************ 00:06:12.058 END TEST env_memory 00:06:12.058 ************************************ 00:06:12.058 10:13:49 env -- common/autotest_common.sh@1142 -- # return 0 00:06:12.058 10:13:49 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:12.058 10:13:49 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:12.058 10:13:49 env -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.058 10:13:49 env -- common/autotest_common.sh@10 -- # set +x 00:06:12.058 ************************************ 00:06:12.058 START TEST env_vtophys 00:06:12.058 ************************************ 00:06:12.058 10:13:49 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:12.058 EAL: lib.eal log level changed from notice to debug 00:06:12.058 EAL: Detected lcore 0 as core 0 on socket 0 00:06:12.058 EAL: Detected lcore 1 as core 1 on socket 0 00:06:12.058 EAL: Detected lcore 2 as core 2 on socket 0 00:06:12.058 EAL: Detected lcore 3 as core 3 on socket 0 00:06:12.058 EAL: Detected lcore 4 as core 4 on socket 0 00:06:12.058 EAL: Detected lcore 5 as core 5 on socket 0 00:06:12.058 EAL: Detected lcore 6 as core 6 on socket 0 00:06:12.058 EAL: Detected lcore 7 as core 7 on socket 0 00:06:12.058 EAL: Detected lcore 8 as core 8 on socket 0 00:06:12.058 EAL: Detected lcore 9 as core 9 on socket 0 00:06:12.058 EAL: Detected lcore 10 as core 10 on socket 0 00:06:12.058 EAL: Detected lcore 11 as core 11 on socket 0 00:06:12.058 EAL: Detected lcore 12 as core 12 on socket 0 00:06:12.058 EAL: Detected lcore 13 as core 13 on socket 0 00:06:12.058 EAL: Detected lcore 14 as core 14 on socket 0 00:06:12.058 EAL: Detected lcore 15 as core 15 on socket 0 00:06:12.058 EAL: Detected lcore 16 as core 16 on socket 0 00:06:12.058 EAL: Detected lcore 17 as core 17 on socket 0 00:06:12.058 EAL: Detected lcore 18 as core 18 on socket 0 00:06:12.058 EAL: Detected lcore 19 as core 19 on socket 0 00:06:12.058 EAL: Detected lcore 20 as core 20 on socket 0 00:06:12.058 EAL: Detected lcore 21 as core 21 on socket 0 00:06:12.058 EAL: Detected lcore 22 as core 22 on socket 0 00:06:12.058 EAL: Detected lcore 23 as core 23 on socket 0 00:06:12.058 EAL: Detected lcore 24 as core 24 on socket 0 00:06:12.058 EAL: Detected lcore 25 as core 25 on socket 0 00:06:12.058 EAL: Detected lcore 26 as core 26 on socket 0 00:06:12.058 EAL: Detected lcore 27 as core 27 on socket 0 00:06:12.058 EAL: Detected lcore 28 as core 28 on socket 0 00:06:12.058 EAL: Detected lcore 29 as core 29 on socket 0 00:06:12.058 EAL: Detected lcore 30 as core 30 on socket 0 00:06:12.058 EAL: Detected lcore 31 as core 31 on socket 0 00:06:12.058 EAL: Detected lcore 32 as core 32 on socket 0 00:06:12.058 EAL: Detected lcore 33 as core 33 on socket 0 00:06:12.058 EAL: Detected lcore 34 as core 34 on socket 0 00:06:12.058 EAL: Detected lcore 35 as core 35 on socket 0 00:06:12.058 EAL: Detected lcore 36 as core 0 on socket 1 00:06:12.058 EAL: Detected lcore 37 as core 1 on socket 1 00:06:12.058 EAL: Detected lcore 38 as core 2 on socket 1 00:06:12.058 EAL: Detected lcore 39 as core 3 on socket 1 00:06:12.058 EAL: Detected lcore 40 as core 4 on socket 1 00:06:12.058 EAL: Detected lcore 41 as core 5 on socket 1 00:06:12.058 EAL: Detected lcore 42 as core 6 on socket 1 00:06:12.058 EAL: Detected lcore 43 as core 7 on socket 1 00:06:12.058 EAL: Detected lcore 44 as core 8 on socket 1 00:06:12.058 EAL: Detected lcore 45 as core 9 on socket 1 00:06:12.058 EAL: Detected lcore 46 as core 10 on socket 1 00:06:12.058 EAL: Detected lcore 47 as core 11 on socket 1 00:06:12.058 EAL: Detected lcore 48 as core 12 on socket 1 00:06:12.058 EAL: Detected lcore 49 as core 13 on socket 1 00:06:12.058 EAL: Detected lcore 50 as core 14 on socket 1 00:06:12.058 EAL: Detected lcore 51 as core 15 on socket 1 00:06:12.058 EAL: Detected lcore 52 as core 16 
on socket 1 00:06:12.058 EAL: Detected lcore 53 as core 17 on socket 1 00:06:12.058 EAL: Detected lcore 54 as core 18 on socket 1 00:06:12.058 EAL: Detected lcore 55 as core 19 on socket 1 00:06:12.058 EAL: Detected lcore 56 as core 20 on socket 1 00:06:12.058 EAL: Detected lcore 57 as core 21 on socket 1 00:06:12.058 EAL: Detected lcore 58 as core 22 on socket 1 00:06:12.058 EAL: Detected lcore 59 as core 23 on socket 1 00:06:12.058 EAL: Detected lcore 60 as core 24 on socket 1 00:06:12.058 EAL: Detected lcore 61 as core 25 on socket 1 00:06:12.058 EAL: Detected lcore 62 as core 26 on socket 1 00:06:12.058 EAL: Detected lcore 63 as core 27 on socket 1 00:06:12.058 EAL: Detected lcore 64 as core 28 on socket 1 00:06:12.058 EAL: Detected lcore 65 as core 29 on socket 1 00:06:12.058 EAL: Detected lcore 66 as core 30 on socket 1 00:06:12.058 EAL: Detected lcore 67 as core 31 on socket 1 00:06:12.058 EAL: Detected lcore 68 as core 32 on socket 1 00:06:12.058 EAL: Detected lcore 69 as core 33 on socket 1 00:06:12.058 EAL: Detected lcore 70 as core 34 on socket 1 00:06:12.058 EAL: Detected lcore 71 as core 35 on socket 1 00:06:12.058 EAL: Detected lcore 72 as core 0 on socket 0 00:06:12.058 EAL: Detected lcore 73 as core 1 on socket 0 00:06:12.058 EAL: Detected lcore 74 as core 2 on socket 0 00:06:12.058 EAL: Detected lcore 75 as core 3 on socket 0 00:06:12.058 EAL: Detected lcore 76 as core 4 on socket 0 00:06:12.058 EAL: Detected lcore 77 as core 5 on socket 0 00:06:12.058 EAL: Detected lcore 78 as core 6 on socket 0 00:06:12.058 EAL: Detected lcore 79 as core 7 on socket 0 00:06:12.058 EAL: Detected lcore 80 as core 8 on socket 0 00:06:12.058 EAL: Detected lcore 81 as core 9 on socket 0 00:06:12.058 EAL: Detected lcore 82 as core 10 on socket 0 00:06:12.058 EAL: Detected lcore 83 as core 11 on socket 0 00:06:12.058 EAL: Detected lcore 84 as core 12 on socket 0 00:06:12.058 EAL: Detected lcore 85 as core 13 on socket 0 00:06:12.058 EAL: Detected lcore 86 as core 14 on socket 0 00:06:12.058 EAL: Detected lcore 87 as core 15 on socket 0 00:06:12.058 EAL: Detected lcore 88 as core 16 on socket 0 00:06:12.058 EAL: Detected lcore 89 as core 17 on socket 0 00:06:12.058 EAL: Detected lcore 90 as core 18 on socket 0 00:06:12.058 EAL: Detected lcore 91 as core 19 on socket 0 00:06:12.058 EAL: Detected lcore 92 as core 20 on socket 0 00:06:12.058 EAL: Detected lcore 93 as core 21 on socket 0 00:06:12.058 EAL: Detected lcore 94 as core 22 on socket 0 00:06:12.058 EAL: Detected lcore 95 as core 23 on socket 0 00:06:12.058 EAL: Detected lcore 96 as core 24 on socket 0 00:06:12.058 EAL: Detected lcore 97 as core 25 on socket 0 00:06:12.058 EAL: Detected lcore 98 as core 26 on socket 0 00:06:12.058 EAL: Detected lcore 99 as core 27 on socket 0 00:06:12.058 EAL: Detected lcore 100 as core 28 on socket 0 00:06:12.058 EAL: Detected lcore 101 as core 29 on socket 0 00:06:12.058 EAL: Detected lcore 102 as core 30 on socket 0 00:06:12.058 EAL: Detected lcore 103 as core 31 on socket 0 00:06:12.058 EAL: Detected lcore 104 as core 32 on socket 0 00:06:12.058 EAL: Detected lcore 105 as core 33 on socket 0 00:06:12.058 EAL: Detected lcore 106 as core 34 on socket 0 00:06:12.058 EAL: Detected lcore 107 as core 35 on socket 0 00:06:12.058 EAL: Detected lcore 108 as core 0 on socket 1 00:06:12.058 EAL: Detected lcore 109 as core 1 on socket 1 00:06:12.058 EAL: Detected lcore 110 as core 2 on socket 1 00:06:12.058 EAL: Detected lcore 111 as core 3 on socket 1 00:06:12.058 EAL: Detected lcore 112 as core 4 on socket 1 
00:06:12.058 EAL: Detected lcore 113 as core 5 on socket 1 00:06:12.058 EAL: Detected lcore 114 as core 6 on socket 1 00:06:12.058 EAL: Detected lcore 115 as core 7 on socket 1 00:06:12.058 EAL: Detected lcore 116 as core 8 on socket 1 00:06:12.058 EAL: Detected lcore 117 as core 9 on socket 1 00:06:12.058 EAL: Detected lcore 118 as core 10 on socket 1 00:06:12.058 EAL: Detected lcore 119 as core 11 on socket 1 00:06:12.058 EAL: Detected lcore 120 as core 12 on socket 1 00:06:12.058 EAL: Detected lcore 121 as core 13 on socket 1 00:06:12.058 EAL: Detected lcore 122 as core 14 on socket 1 00:06:12.058 EAL: Detected lcore 123 as core 15 on socket 1 00:06:12.058 EAL: Detected lcore 124 as core 16 on socket 1 00:06:12.058 EAL: Detected lcore 125 as core 17 on socket 1 00:06:12.058 EAL: Detected lcore 126 as core 18 on socket 1 00:06:12.058 EAL: Detected lcore 127 as core 19 on socket 1 00:06:12.058 EAL: Skipped lcore 128 as core 20 on socket 1 00:06:12.058 EAL: Skipped lcore 129 as core 21 on socket 1 00:06:12.058 EAL: Skipped lcore 130 as core 22 on socket 1 00:06:12.058 EAL: Skipped lcore 131 as core 23 on socket 1 00:06:12.058 EAL: Skipped lcore 132 as core 24 on socket 1 00:06:12.058 EAL: Skipped lcore 133 as core 25 on socket 1 00:06:12.058 EAL: Skipped lcore 134 as core 26 on socket 1 00:06:12.058 EAL: Skipped lcore 135 as core 27 on socket 1 00:06:12.058 EAL: Skipped lcore 136 as core 28 on socket 1 00:06:12.058 EAL: Skipped lcore 137 as core 29 on socket 1 00:06:12.058 EAL: Skipped lcore 138 as core 30 on socket 1 00:06:12.058 EAL: Skipped lcore 139 as core 31 on socket 1 00:06:12.058 EAL: Skipped lcore 140 as core 32 on socket 1 00:06:12.058 EAL: Skipped lcore 141 as core 33 on socket 1 00:06:12.058 EAL: Skipped lcore 142 as core 34 on socket 1 00:06:12.058 EAL: Skipped lcore 143 as core 35 on socket 1 00:06:12.058 EAL: Maximum logical cores by configuration: 128 00:06:12.058 EAL: Detected CPU lcores: 128 00:06:12.058 EAL: Detected NUMA nodes: 2 00:06:12.058 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:06:12.058 EAL: Detected shared linkage of DPDK 00:06:12.058 EAL: No shared files mode enabled, IPC will be disabled 00:06:12.058 EAL: Bus pci wants IOVA as 'DC' 00:06:12.058 EAL: Buses did not request a specific IOVA mode. 00:06:12.058 EAL: IOMMU is available, selecting IOVA as VA mode. 00:06:12.058 EAL: Selected IOVA mode 'VA' 00:06:12.058 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.058 EAL: Probing VFIO support... 00:06:12.058 EAL: IOMMU type 1 (Type 1) is supported 00:06:12.058 EAL: IOMMU type 7 (sPAPR) is not supported 00:06:12.058 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:06:12.058 EAL: VFIO support initialized 00:06:12.058 EAL: Ask a virtual area of 0x2e000 bytes 00:06:12.058 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:12.058 EAL: Setting up physically contiguous memory... 
00:06:12.058 EAL: Setting maximum number of open files to 524288 00:06:12.058 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:12.058 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:06:12.058 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:12.059 EAL: Ask a virtual area of 0x61000 bytes 00:06:12.059 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:12.059 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:12.059 EAL: Ask a virtual area of 0x400000000 bytes 00:06:12.059 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:12.059 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:12.059 EAL: Ask a virtual area of 0x61000 bytes 00:06:12.059 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:12.059 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:12.059 EAL: Ask a virtual area of 0x400000000 bytes 00:06:12.059 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:12.059 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:12.059 EAL: Ask a virtual area of 0x61000 bytes 00:06:12.059 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:12.059 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:12.059 EAL: Ask a virtual area of 0x400000000 bytes 00:06:12.059 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:12.059 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:12.059 EAL: Ask a virtual area of 0x61000 bytes 00:06:12.059 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:12.059 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:12.059 EAL: Ask a virtual area of 0x400000000 bytes 00:06:12.059 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:12.059 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:12.059 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:06:12.059 EAL: Ask a virtual area of 0x61000 bytes 00:06:12.059 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:06:12.059 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:12.059 EAL: Ask a virtual area of 0x400000000 bytes 00:06:12.059 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:06:12.059 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:06:12.059 EAL: Ask a virtual area of 0x61000 bytes 00:06:12.059 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:06:12.059 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:12.059 EAL: Ask a virtual area of 0x400000000 bytes 00:06:12.059 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:06:12.059 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:06:12.059 EAL: Ask a virtual area of 0x61000 bytes 00:06:12.059 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:06:12.059 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:12.059 EAL: Ask a virtual area of 0x400000000 bytes 00:06:12.059 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:06:12.059 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:06:12.059 EAL: Ask a virtual area of 0x61000 bytes 00:06:12.059 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:06:12.059 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:12.059 EAL: Ask a virtual area of 0x400000000 bytes 00:06:12.059 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:06:12.059 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:06:12.059 EAL: Hugepages will be freed exactly as allocated. 00:06:12.059 EAL: No shared files mode enabled, IPC is disabled 00:06:12.059 EAL: No shared files mode enabled, IPC is disabled 00:06:12.059 EAL: TSC frequency is ~2400000 KHz 00:06:12.059 EAL: Main lcore 0 is ready (tid=7fccce519a00;cpuset=[0]) 00:06:12.059 EAL: Trying to obtain current memory policy. 00:06:12.059 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:12.059 EAL: Restoring previous memory policy: 0 00:06:12.059 EAL: request: mp_malloc_sync 00:06:12.059 EAL: No shared files mode enabled, IPC is disabled 00:06:12.059 EAL: Heap on socket 0 was expanded by 2MB 00:06:12.059 EAL: No shared files mode enabled, IPC is disabled 00:06:12.319 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:12.319 EAL: Mem event callback 'spdk:(nil)' registered 00:06:12.319 00:06:12.319 00:06:12.319 CUnit - A unit testing framework for C - Version 2.1-3 00:06:12.319 http://cunit.sourceforge.net/ 00:06:12.319 00:06:12.319 00:06:12.319 Suite: components_suite 00:06:12.319 Test: vtophys_malloc_test ...passed 00:06:12.319 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:12.319 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:12.319 EAL: Restoring previous memory policy: 4 00:06:12.319 EAL: Calling mem event callback 'spdk:(nil)' 00:06:12.319 EAL: request: mp_malloc_sync 00:06:12.319 EAL: No shared files mode enabled, IPC is disabled 00:06:12.319 EAL: Heap on socket 0 was expanded by 4MB 00:06:12.319 EAL: Calling mem event callback 'spdk:(nil)' 00:06:12.319 EAL: request: mp_malloc_sync 00:06:12.319 EAL: No shared files mode enabled, IPC is disabled 00:06:12.319 EAL: Heap on socket 0 was shrunk by 4MB 00:06:12.320 EAL: Trying to obtain current memory policy. 00:06:12.320 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:12.320 EAL: Restoring previous memory policy: 4 00:06:12.320 EAL: Calling mem event callback 'spdk:(nil)' 00:06:12.320 EAL: request: mp_malloc_sync 00:06:12.320 EAL: No shared files mode enabled, IPC is disabled 00:06:12.320 EAL: Heap on socket 0 was expanded by 6MB 00:06:12.320 EAL: Calling mem event callback 'spdk:(nil)' 00:06:12.320 EAL: request: mp_malloc_sync 00:06:12.320 EAL: No shared files mode enabled, IPC is disabled 00:06:12.320 EAL: Heap on socket 0 was shrunk by 6MB 00:06:12.320 EAL: Trying to obtain current memory policy. 00:06:12.320 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:12.320 EAL: Restoring previous memory policy: 4 00:06:12.320 EAL: Calling mem event callback 'spdk:(nil)' 00:06:12.320 EAL: request: mp_malloc_sync 00:06:12.320 EAL: No shared files mode enabled, IPC is disabled 00:06:12.320 EAL: Heap on socket 0 was expanded by 10MB 00:06:12.320 EAL: Calling mem event callback 'spdk:(nil)' 00:06:12.320 EAL: request: mp_malloc_sync 00:06:12.320 EAL: No shared files mode enabled, IPC is disabled 00:06:12.320 EAL: Heap on socket 0 was shrunk by 10MB 00:06:12.320 EAL: Trying to obtain current memory policy. 
00:06:12.320 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:12.320 EAL: Restoring previous memory policy: 4 00:06:12.320 EAL: Calling mem event callback 'spdk:(nil)' 00:06:12.320 EAL: request: mp_malloc_sync 00:06:12.320 EAL: No shared files mode enabled, IPC is disabled 00:06:12.320 EAL: Heap on socket 0 was expanded by 18MB 00:06:12.320 EAL: Calling mem event callback 'spdk:(nil)' 00:06:12.320 EAL: request: mp_malloc_sync 00:06:12.320 EAL: No shared files mode enabled, IPC is disabled 00:06:12.320 EAL: Heap on socket 0 was shrunk by 18MB 00:06:12.320 EAL: Trying to obtain current memory policy. 00:06:12.320 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:12.320 EAL: Restoring previous memory policy: 4 00:06:12.320 EAL: Calling mem event callback 'spdk:(nil)' 00:06:12.320 EAL: request: mp_malloc_sync 00:06:12.320 EAL: No shared files mode enabled, IPC is disabled 00:06:12.320 EAL: Heap on socket 0 was expanded by 34MB 00:06:12.320 EAL: Calling mem event callback 'spdk:(nil)' 00:06:12.320 EAL: request: mp_malloc_sync 00:06:12.320 EAL: No shared files mode enabled, IPC is disabled 00:06:12.320 EAL: Heap on socket 0 was shrunk by 34MB 00:06:12.320 EAL: Trying to obtain current memory policy. 00:06:12.320 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:12.320 EAL: Restoring previous memory policy: 4 00:06:12.320 EAL: Calling mem event callback 'spdk:(nil)' 00:06:12.320 EAL: request: mp_malloc_sync 00:06:12.320 EAL: No shared files mode enabled, IPC is disabled 00:06:12.320 EAL: Heap on socket 0 was expanded by 66MB 00:06:12.320 EAL: Calling mem event callback 'spdk:(nil)' 00:06:12.320 EAL: request: mp_malloc_sync 00:06:12.320 EAL: No shared files mode enabled, IPC is disabled 00:06:12.320 EAL: Heap on socket 0 was shrunk by 66MB 00:06:12.320 EAL: Trying to obtain current memory policy. 00:06:12.320 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:12.320 EAL: Restoring previous memory policy: 4 00:06:12.320 EAL: Calling mem event callback 'spdk:(nil)' 00:06:12.320 EAL: request: mp_malloc_sync 00:06:12.320 EAL: No shared files mode enabled, IPC is disabled 00:06:12.320 EAL: Heap on socket 0 was expanded by 130MB 00:06:12.320 EAL: Calling mem event callback 'spdk:(nil)' 00:06:12.320 EAL: request: mp_malloc_sync 00:06:12.320 EAL: No shared files mode enabled, IPC is disabled 00:06:12.320 EAL: Heap on socket 0 was shrunk by 130MB 00:06:12.320 EAL: Trying to obtain current memory policy. 00:06:12.320 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:12.320 EAL: Restoring previous memory policy: 4 00:06:12.320 EAL: Calling mem event callback 'spdk:(nil)' 00:06:12.320 EAL: request: mp_malloc_sync 00:06:12.320 EAL: No shared files mode enabled, IPC is disabled 00:06:12.320 EAL: Heap on socket 0 was expanded by 258MB 00:06:12.320 EAL: Calling mem event callback 'spdk:(nil)' 00:06:12.320 EAL: request: mp_malloc_sync 00:06:12.320 EAL: No shared files mode enabled, IPC is disabled 00:06:12.320 EAL: Heap on socket 0 was shrunk by 258MB 00:06:12.320 EAL: Trying to obtain current memory policy. 
00:06:12.320 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:12.580 EAL: Restoring previous memory policy: 4 00:06:12.580 EAL: Calling mem event callback 'spdk:(nil)' 00:06:12.580 EAL: request: mp_malloc_sync 00:06:12.580 EAL: No shared files mode enabled, IPC is disabled 00:06:12.580 EAL: Heap on socket 0 was expanded by 514MB 00:06:12.580 EAL: Calling mem event callback 'spdk:(nil)' 00:06:12.580 EAL: request: mp_malloc_sync 00:06:12.580 EAL: No shared files mode enabled, IPC is disabled 00:06:12.580 EAL: Heap on socket 0 was shrunk by 514MB 00:06:12.580 EAL: Trying to obtain current memory policy. 00:06:12.580 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:12.580 EAL: Restoring previous memory policy: 4 00:06:12.580 EAL: Calling mem event callback 'spdk:(nil)' 00:06:12.580 EAL: request: mp_malloc_sync 00:06:12.580 EAL: No shared files mode enabled, IPC is disabled 00:06:12.580 EAL: Heap on socket 0 was expanded by 1026MB 00:06:12.840 EAL: Calling mem event callback 'spdk:(nil)' 00:06:12.840 EAL: request: mp_malloc_sync 00:06:12.840 EAL: No shared files mode enabled, IPC is disabled 00:06:12.840 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:12.840 passed 00:06:12.840 00:06:12.840 Run Summary: Type Total Ran Passed Failed Inactive 00:06:12.840 suites 1 1 n/a 0 0 00:06:12.840 tests 2 2 2 0 0 00:06:12.840 asserts 497 497 497 0 n/a 00:06:12.840 00:06:12.840 Elapsed time = 0.657 seconds 00:06:12.840 EAL: Calling mem event callback 'spdk:(nil)' 00:06:12.840 EAL: request: mp_malloc_sync 00:06:12.840 EAL: No shared files mode enabled, IPC is disabled 00:06:12.840 EAL: Heap on socket 0 was shrunk by 2MB 00:06:12.840 EAL: No shared files mode enabled, IPC is disabled 00:06:12.840 EAL: No shared files mode enabled, IPC is disabled 00:06:12.840 EAL: No shared files mode enabled, IPC is disabled 00:06:12.840 00:06:12.840 real 0m0.786s 00:06:12.840 user 0m0.406s 00:06:12.840 sys 0m0.353s 00:06:12.840 10:13:49 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.840 10:13:49 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:12.840 ************************************ 00:06:12.840 END TEST env_vtophys 00:06:12.840 ************************************ 00:06:12.840 10:13:50 env -- common/autotest_common.sh@1142 -- # return 0 00:06:12.840 10:13:50 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:06:12.840 10:13:50 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:12.840 10:13:50 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.840 10:13:50 env -- common/autotest_common.sh@10 -- # set +x 00:06:13.101 ************************************ 00:06:13.101 START TEST env_pci 00:06:13.101 ************************************ 00:06:13.101 10:13:50 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:06:13.101 00:06:13.101 00:06:13.101 CUnit - A unit testing framework for C - Version 2.1-3 00:06:13.101 http://cunit.sourceforge.net/ 00:06:13.101 00:06:13.101 00:06:13.101 Suite: pci 00:06:13.101 Test: pci_hook ...[2024-07-15 10:13:50.066175] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2720985 has claimed it 00:06:13.101 EAL: Cannot find device (10000:00:01.0) 00:06:13.101 EAL: Failed to attach device on primary process 00:06:13.101 passed 00:06:13.101 00:06:13.101 
Run Summary: Type Total Ran Passed Failed Inactive 00:06:13.101 suites 1 1 n/a 0 0 00:06:13.101 tests 1 1 1 0 0 00:06:13.101 asserts 25 25 25 0 n/a 00:06:13.101 00:06:13.101 Elapsed time = 0.035 seconds 00:06:13.101 00:06:13.101 real 0m0.056s 00:06:13.101 user 0m0.015s 00:06:13.101 sys 0m0.041s 00:06:13.101 10:13:50 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:13.101 10:13:50 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:13.101 ************************************ 00:06:13.101 END TEST env_pci 00:06:13.101 ************************************ 00:06:13.101 10:13:50 env -- common/autotest_common.sh@1142 -- # return 0 00:06:13.101 10:13:50 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:13.101 10:13:50 env -- env/env.sh@15 -- # uname 00:06:13.101 10:13:50 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:13.101 10:13:50 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:13.101 10:13:50 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:13.101 10:13:50 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:06:13.101 10:13:50 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.101 10:13:50 env -- common/autotest_common.sh@10 -- # set +x 00:06:13.101 ************************************ 00:06:13.101 START TEST env_dpdk_post_init 00:06:13.101 ************************************ 00:06:13.101 10:13:50 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:13.101 EAL: Detected CPU lcores: 128 00:06:13.101 EAL: Detected NUMA nodes: 2 00:06:13.101 EAL: Detected shared linkage of DPDK 00:06:13.101 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:13.101 EAL: Selected IOVA mode 'VA' 00:06:13.101 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.101 EAL: VFIO support initialized 00:06:13.101 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:13.362 EAL: Using IOMMU type 1 (Type 1) 00:06:13.362 EAL: Ignore mapping IO port bar(1) 00:06:13.362 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:06:13.622 EAL: Ignore mapping IO port bar(1) 00:06:13.622 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:06:13.882 EAL: Ignore mapping IO port bar(1) 00:06:13.882 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:06:14.142 EAL: Ignore mapping IO port bar(1) 00:06:14.142 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:06:14.142 EAL: Ignore mapping IO port bar(1) 00:06:14.401 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:06:14.401 EAL: Ignore mapping IO port bar(1) 00:06:14.661 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:06:14.661 EAL: Ignore mapping IO port bar(1) 00:06:14.921 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:06:14.921 EAL: Ignore mapping IO port bar(1) 00:06:14.921 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:06:15.181 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:06:15.441 EAL: Ignore mapping IO port bar(1) 00:06:15.441 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:06:15.702 EAL: 
Ignore mapping IO port bar(1) 00:06:15.702 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:06:15.962 EAL: Ignore mapping IO port bar(1) 00:06:15.962 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:06:15.962 EAL: Ignore mapping IO port bar(1) 00:06:16.221 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:06:16.221 EAL: Ignore mapping IO port bar(1) 00:06:16.480 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:06:16.480 EAL: Ignore mapping IO port bar(1) 00:06:16.740 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:06:16.740 EAL: Ignore mapping IO port bar(1) 00:06:16.740 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:06:17.000 EAL: Ignore mapping IO port bar(1) 00:06:17.000 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:06:17.000 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:06:17.000 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:06:17.261 Starting DPDK initialization... 00:06:17.261 Starting SPDK post initialization... 00:06:17.261 SPDK NVMe probe 00:06:17.261 Attaching to 0000:65:00.0 00:06:17.261 Attached to 0000:65:00.0 00:06:17.261 Cleaning up... 00:06:18.732 00:06:18.732 real 0m5.735s 00:06:18.732 user 0m0.182s 00:06:18.732 sys 0m0.093s 00:06:18.732 10:13:55 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.732 10:13:55 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:18.732 ************************************ 00:06:18.732 END TEST env_dpdk_post_init 00:06:18.732 ************************************ 00:06:18.992 10:13:55 env -- common/autotest_common.sh@1142 -- # return 0 00:06:18.992 10:13:55 env -- env/env.sh@26 -- # uname 00:06:18.992 10:13:55 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:18.992 10:13:55 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:18.992 10:13:55 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:18.992 10:13:55 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.992 10:13:55 env -- common/autotest_common.sh@10 -- # set +x 00:06:18.992 ************************************ 00:06:18.992 START TEST env_mem_callbacks 00:06:18.992 ************************************ 00:06:18.992 10:13:56 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:18.992 EAL: Detected CPU lcores: 128 00:06:18.992 EAL: Detected NUMA nodes: 2 00:06:18.992 EAL: Detected shared linkage of DPDK 00:06:18.992 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:18.992 EAL: Selected IOVA mode 'VA' 00:06:18.992 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.992 EAL: VFIO support initialized 00:06:18.992 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:18.992 00:06:18.992 00:06:18.992 CUnit - A unit testing framework for C - Version 2.1-3 00:06:18.992 http://cunit.sourceforge.net/ 00:06:18.992 00:06:18.992 00:06:18.992 Suite: memory 00:06:18.992 Test: test ... 
00:06:18.992 register 0x200000200000 2097152 00:06:18.992 malloc 3145728 00:06:18.992 register 0x200000400000 4194304 00:06:18.992 buf 0x200000500000 len 3145728 PASSED 00:06:18.992 malloc 64 00:06:18.992 buf 0x2000004fff40 len 64 PASSED 00:06:18.992 malloc 4194304 00:06:18.992 register 0x200000800000 6291456 00:06:18.992 buf 0x200000a00000 len 4194304 PASSED 00:06:18.992 free 0x200000500000 3145728 00:06:18.992 free 0x2000004fff40 64 00:06:18.992 unregister 0x200000400000 4194304 PASSED 00:06:18.992 free 0x200000a00000 4194304 00:06:18.992 unregister 0x200000800000 6291456 PASSED 00:06:18.992 malloc 8388608 00:06:18.992 register 0x200000400000 10485760 00:06:18.992 buf 0x200000600000 len 8388608 PASSED 00:06:18.992 free 0x200000600000 8388608 00:06:18.992 unregister 0x200000400000 10485760 PASSED 00:06:18.992 passed 00:06:18.992 00:06:18.992 Run Summary: Type Total Ran Passed Failed Inactive 00:06:18.993 suites 1 1 n/a 0 0 00:06:18.993 tests 1 1 1 0 0 00:06:18.993 asserts 15 15 15 0 n/a 00:06:18.993 00:06:18.993 Elapsed time = 0.008 seconds 00:06:18.993 00:06:18.993 real 0m0.066s 00:06:18.993 user 0m0.027s 00:06:18.993 sys 0m0.040s 00:06:18.993 10:13:56 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.993 10:13:56 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:18.993 ************************************ 00:06:18.993 END TEST env_mem_callbacks 00:06:18.993 ************************************ 00:06:18.993 10:13:56 env -- common/autotest_common.sh@1142 -- # return 0 00:06:18.993 00:06:18.993 real 0m7.350s 00:06:18.993 user 0m1.019s 00:06:18.993 sys 0m0.869s 00:06:18.993 10:13:56 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.993 10:13:56 env -- common/autotest_common.sh@10 -- # set +x 00:06:18.993 ************************************ 00:06:18.993 END TEST env 00:06:18.993 ************************************ 00:06:18.993 10:13:56 -- common/autotest_common.sh@1142 -- # return 0 00:06:18.993 10:13:56 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:06:18.993 10:13:56 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:18.993 10:13:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.993 10:13:56 -- common/autotest_common.sh@10 -- # set +x 00:06:18.993 ************************************ 00:06:18.993 START TEST rpc 00:06:18.993 ************************************ 00:06:18.993 10:13:56 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:06:19.253 * Looking for test storage... 00:06:19.253 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:06:19.253 10:13:56 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2722434 00:06:19.253 10:13:56 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:19.253 10:13:56 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:06:19.253 10:13:56 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2722434 00:06:19.253 10:13:56 rpc -- common/autotest_common.sh@829 -- # '[' -z 2722434 ']' 00:06:19.253 10:13:56 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.253 10:13:56 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:19.253 10:13:56 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:19.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.253 10:13:56 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:19.253 10:13:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.253 [2024-07-15 10:13:56.338907] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:19.253 [2024-07-15 10:13:56.338955] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2722434 ] 00:06:19.253 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.253 [2024-07-15 10:13:56.406897] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.515 [2024-07-15 10:13:56.471673] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:19.515 [2024-07-15 10:13:56.471709] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2722434' to capture a snapshot of events at runtime. 00:06:19.515 [2024-07-15 10:13:56.471717] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:19.515 [2024-07-15 10:13:56.471723] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:19.515 [2024-07-15 10:13:56.471728] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2722434 for offline analysis/debug. 00:06:19.515 [2024-07-15 10:13:56.471746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.085 10:13:57 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:20.085 10:13:57 rpc -- common/autotest_common.sh@862 -- # return 0 00:06:20.085 10:13:57 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:06:20.085 10:13:57 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:06:20.085 10:13:57 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:20.085 10:13:57 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:20.085 10:13:57 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:20.085 10:13:57 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.085 10:13:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.085 ************************************ 00:06:20.085 START TEST rpc_integrity 00:06:20.085 ************************************ 00:06:20.085 10:13:57 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:06:20.085 10:13:57 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:20.085 10:13:57 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.085 10:13:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:20.085 10:13:57 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.085 10:13:57 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:20.085 10:13:57 
rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:20.085 10:13:57 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:20.085 10:13:57 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:20.085 10:13:57 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.085 10:13:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:20.085 10:13:57 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.085 10:13:57 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:20.085 10:13:57 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:20.085 10:13:57 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.085 10:13:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:20.085 10:13:57 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.085 10:13:57 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:20.085 { 00:06:20.085 "name": "Malloc0", 00:06:20.085 "aliases": [ 00:06:20.085 "7dd60332-211f-4752-a22d-58c6c1ac7056" 00:06:20.085 ], 00:06:20.085 "product_name": "Malloc disk", 00:06:20.085 "block_size": 512, 00:06:20.085 "num_blocks": 16384, 00:06:20.085 "uuid": "7dd60332-211f-4752-a22d-58c6c1ac7056", 00:06:20.085 "assigned_rate_limits": { 00:06:20.085 "rw_ios_per_sec": 0, 00:06:20.085 "rw_mbytes_per_sec": 0, 00:06:20.085 "r_mbytes_per_sec": 0, 00:06:20.085 "w_mbytes_per_sec": 0 00:06:20.085 }, 00:06:20.085 "claimed": false, 00:06:20.085 "zoned": false, 00:06:20.085 "supported_io_types": { 00:06:20.085 "read": true, 00:06:20.085 "write": true, 00:06:20.085 "unmap": true, 00:06:20.085 "flush": true, 00:06:20.085 "reset": true, 00:06:20.085 "nvme_admin": false, 00:06:20.085 "nvme_io": false, 00:06:20.085 "nvme_io_md": false, 00:06:20.085 "write_zeroes": true, 00:06:20.085 "zcopy": true, 00:06:20.085 "get_zone_info": false, 00:06:20.085 "zone_management": false, 00:06:20.085 "zone_append": false, 00:06:20.085 "compare": false, 00:06:20.085 "compare_and_write": false, 00:06:20.085 "abort": true, 00:06:20.085 "seek_hole": false, 00:06:20.085 "seek_data": false, 00:06:20.085 "copy": true, 00:06:20.085 "nvme_iov_md": false 00:06:20.085 }, 00:06:20.085 "memory_domains": [ 00:06:20.085 { 00:06:20.085 "dma_device_id": "system", 00:06:20.085 "dma_device_type": 1 00:06:20.085 }, 00:06:20.085 { 00:06:20.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:20.085 "dma_device_type": 2 00:06:20.085 } 00:06:20.085 ], 00:06:20.085 "driver_specific": {} 00:06:20.085 } 00:06:20.085 ]' 00:06:20.085 10:13:57 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:20.085 10:13:57 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:20.085 10:13:57 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:20.085 10:13:57 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.085 10:13:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:20.085 [2024-07-15 10:13:57.267449] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:20.085 [2024-07-15 10:13:57.267482] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:20.085 [2024-07-15 10:13:57.267495] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e75490 00:06:20.085 [2024-07-15 10:13:57.267502] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:20.085 [2024-07-15 10:13:57.268820] 
vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:20.085 [2024-07-15 10:13:57.268840] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:20.085 Passthru0 00:06:20.085 10:13:57 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.085 10:13:57 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:20.085 10:13:57 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.085 10:13:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:20.346 10:13:57 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.346 10:13:57 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:20.346 { 00:06:20.346 "name": "Malloc0", 00:06:20.346 "aliases": [ 00:06:20.346 "7dd60332-211f-4752-a22d-58c6c1ac7056" 00:06:20.346 ], 00:06:20.346 "product_name": "Malloc disk", 00:06:20.346 "block_size": 512, 00:06:20.346 "num_blocks": 16384, 00:06:20.346 "uuid": "7dd60332-211f-4752-a22d-58c6c1ac7056", 00:06:20.346 "assigned_rate_limits": { 00:06:20.346 "rw_ios_per_sec": 0, 00:06:20.346 "rw_mbytes_per_sec": 0, 00:06:20.346 "r_mbytes_per_sec": 0, 00:06:20.346 "w_mbytes_per_sec": 0 00:06:20.346 }, 00:06:20.346 "claimed": true, 00:06:20.346 "claim_type": "exclusive_write", 00:06:20.346 "zoned": false, 00:06:20.346 "supported_io_types": { 00:06:20.346 "read": true, 00:06:20.346 "write": true, 00:06:20.346 "unmap": true, 00:06:20.346 "flush": true, 00:06:20.346 "reset": true, 00:06:20.346 "nvme_admin": false, 00:06:20.346 "nvme_io": false, 00:06:20.346 "nvme_io_md": false, 00:06:20.346 "write_zeroes": true, 00:06:20.346 "zcopy": true, 00:06:20.346 "get_zone_info": false, 00:06:20.346 "zone_management": false, 00:06:20.346 "zone_append": false, 00:06:20.346 "compare": false, 00:06:20.346 "compare_and_write": false, 00:06:20.346 "abort": true, 00:06:20.346 "seek_hole": false, 00:06:20.346 "seek_data": false, 00:06:20.346 "copy": true, 00:06:20.346 "nvme_iov_md": false 00:06:20.346 }, 00:06:20.346 "memory_domains": [ 00:06:20.346 { 00:06:20.346 "dma_device_id": "system", 00:06:20.346 "dma_device_type": 1 00:06:20.346 }, 00:06:20.346 { 00:06:20.346 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:20.346 "dma_device_type": 2 00:06:20.346 } 00:06:20.346 ], 00:06:20.346 "driver_specific": {} 00:06:20.346 }, 00:06:20.346 { 00:06:20.346 "name": "Passthru0", 00:06:20.346 "aliases": [ 00:06:20.346 "a98cdcce-a9e8-5c6e-9d3e-33f4545ab3a6" 00:06:20.346 ], 00:06:20.346 "product_name": "passthru", 00:06:20.346 "block_size": 512, 00:06:20.346 "num_blocks": 16384, 00:06:20.346 "uuid": "a98cdcce-a9e8-5c6e-9d3e-33f4545ab3a6", 00:06:20.346 "assigned_rate_limits": { 00:06:20.346 "rw_ios_per_sec": 0, 00:06:20.346 "rw_mbytes_per_sec": 0, 00:06:20.346 "r_mbytes_per_sec": 0, 00:06:20.346 "w_mbytes_per_sec": 0 00:06:20.346 }, 00:06:20.346 "claimed": false, 00:06:20.346 "zoned": false, 00:06:20.346 "supported_io_types": { 00:06:20.346 "read": true, 00:06:20.346 "write": true, 00:06:20.346 "unmap": true, 00:06:20.346 "flush": true, 00:06:20.346 "reset": true, 00:06:20.346 "nvme_admin": false, 00:06:20.346 "nvme_io": false, 00:06:20.346 "nvme_io_md": false, 00:06:20.346 "write_zeroes": true, 00:06:20.346 "zcopy": true, 00:06:20.346 "get_zone_info": false, 00:06:20.346 "zone_management": false, 00:06:20.346 "zone_append": false, 00:06:20.346 "compare": false, 00:06:20.346 "compare_and_write": false, 00:06:20.346 "abort": true, 00:06:20.346 "seek_hole": false, 00:06:20.346 "seek_data": 
false, 00:06:20.346 "copy": true, 00:06:20.346 "nvme_iov_md": false 00:06:20.346 }, 00:06:20.346 "memory_domains": [ 00:06:20.346 { 00:06:20.346 "dma_device_id": "system", 00:06:20.346 "dma_device_type": 1 00:06:20.346 }, 00:06:20.346 { 00:06:20.346 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:20.346 "dma_device_type": 2 00:06:20.346 } 00:06:20.346 ], 00:06:20.346 "driver_specific": { 00:06:20.346 "passthru": { 00:06:20.346 "name": "Passthru0", 00:06:20.346 "base_bdev_name": "Malloc0" 00:06:20.346 } 00:06:20.346 } 00:06:20.346 } 00:06:20.346 ]' 00:06:20.346 10:13:57 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:20.346 10:13:57 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:20.346 10:13:57 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:20.346 10:13:57 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.346 10:13:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:20.346 10:13:57 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.346 10:13:57 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:20.346 10:13:57 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.346 10:13:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:20.346 10:13:57 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.346 10:13:57 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:20.346 10:13:57 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.346 10:13:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:20.346 10:13:57 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.346 10:13:57 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:20.346 10:13:57 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:20.346 10:13:57 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:20.346 00:06:20.346 real 0m0.290s 00:06:20.346 user 0m0.187s 00:06:20.346 sys 0m0.039s 00:06:20.346 10:13:57 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.346 10:13:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:20.346 ************************************ 00:06:20.346 END TEST rpc_integrity 00:06:20.346 ************************************ 00:06:20.346 10:13:57 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:20.346 10:13:57 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:20.346 10:13:57 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:20.346 10:13:57 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.346 10:13:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.346 ************************************ 00:06:20.346 START TEST rpc_plugins 00:06:20.346 ************************************ 00:06:20.346 10:13:57 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:06:20.346 10:13:57 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:20.346 10:13:57 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.346 10:13:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:20.346 10:13:57 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.346 10:13:57 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:20.346 10:13:57 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 
00:06:20.346 10:13:57 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.346 10:13:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:20.346 10:13:57 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.346 10:13:57 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:20.346 { 00:06:20.346 "name": "Malloc1", 00:06:20.346 "aliases": [ 00:06:20.346 "ff5fa68b-c876-4698-bc63-dbd047f0350b" 00:06:20.346 ], 00:06:20.346 "product_name": "Malloc disk", 00:06:20.346 "block_size": 4096, 00:06:20.346 "num_blocks": 256, 00:06:20.346 "uuid": "ff5fa68b-c876-4698-bc63-dbd047f0350b", 00:06:20.346 "assigned_rate_limits": { 00:06:20.346 "rw_ios_per_sec": 0, 00:06:20.346 "rw_mbytes_per_sec": 0, 00:06:20.346 "r_mbytes_per_sec": 0, 00:06:20.346 "w_mbytes_per_sec": 0 00:06:20.346 }, 00:06:20.346 "claimed": false, 00:06:20.346 "zoned": false, 00:06:20.346 "supported_io_types": { 00:06:20.346 "read": true, 00:06:20.346 "write": true, 00:06:20.346 "unmap": true, 00:06:20.346 "flush": true, 00:06:20.346 "reset": true, 00:06:20.346 "nvme_admin": false, 00:06:20.346 "nvme_io": false, 00:06:20.346 "nvme_io_md": false, 00:06:20.346 "write_zeroes": true, 00:06:20.346 "zcopy": true, 00:06:20.346 "get_zone_info": false, 00:06:20.346 "zone_management": false, 00:06:20.346 "zone_append": false, 00:06:20.346 "compare": false, 00:06:20.346 "compare_and_write": false, 00:06:20.346 "abort": true, 00:06:20.346 "seek_hole": false, 00:06:20.346 "seek_data": false, 00:06:20.346 "copy": true, 00:06:20.346 "nvme_iov_md": false 00:06:20.346 }, 00:06:20.346 "memory_domains": [ 00:06:20.346 { 00:06:20.346 "dma_device_id": "system", 00:06:20.346 "dma_device_type": 1 00:06:20.346 }, 00:06:20.346 { 00:06:20.346 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:20.346 "dma_device_type": 2 00:06:20.346 } 00:06:20.346 ], 00:06:20.346 "driver_specific": {} 00:06:20.346 } 00:06:20.346 ]' 00:06:20.346 10:13:57 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:20.615 10:13:57 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:20.615 10:13:57 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:20.615 10:13:57 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.615 10:13:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:20.616 10:13:57 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.616 10:13:57 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:20.616 10:13:57 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.616 10:13:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:20.616 10:13:57 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.616 10:13:57 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:20.616 10:13:57 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:20.616 10:13:57 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:20.616 00:06:20.616 real 0m0.153s 00:06:20.616 user 0m0.093s 00:06:20.616 sys 0m0.021s 00:06:20.616 10:13:57 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.616 10:13:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:20.616 ************************************ 00:06:20.616 END TEST rpc_plugins 00:06:20.616 ************************************ 00:06:20.616 10:13:57 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:20.616 10:13:57 rpc -- rpc/rpc.sh@75 -- # run_test 
rpc_trace_cmd_test rpc_trace_cmd_test 00:06:20.616 10:13:57 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:20.616 10:13:57 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.616 10:13:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.616 ************************************ 00:06:20.616 START TEST rpc_trace_cmd_test 00:06:20.616 ************************************ 00:06:20.616 10:13:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:06:20.616 10:13:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:20.616 10:13:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:20.616 10:13:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.616 10:13:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:20.616 10:13:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.616 10:13:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:20.616 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2722434", 00:06:20.616 "tpoint_group_mask": "0x8", 00:06:20.616 "iscsi_conn": { 00:06:20.616 "mask": "0x2", 00:06:20.616 "tpoint_mask": "0x0" 00:06:20.616 }, 00:06:20.616 "scsi": { 00:06:20.616 "mask": "0x4", 00:06:20.616 "tpoint_mask": "0x0" 00:06:20.616 }, 00:06:20.616 "bdev": { 00:06:20.616 "mask": "0x8", 00:06:20.616 "tpoint_mask": "0xffffffffffffffff" 00:06:20.616 }, 00:06:20.616 "nvmf_rdma": { 00:06:20.616 "mask": "0x10", 00:06:20.616 "tpoint_mask": "0x0" 00:06:20.616 }, 00:06:20.616 "nvmf_tcp": { 00:06:20.616 "mask": "0x20", 00:06:20.616 "tpoint_mask": "0x0" 00:06:20.616 }, 00:06:20.616 "ftl": { 00:06:20.616 "mask": "0x40", 00:06:20.616 "tpoint_mask": "0x0" 00:06:20.616 }, 00:06:20.616 "blobfs": { 00:06:20.616 "mask": "0x80", 00:06:20.616 "tpoint_mask": "0x0" 00:06:20.616 }, 00:06:20.616 "dsa": { 00:06:20.616 "mask": "0x200", 00:06:20.616 "tpoint_mask": "0x0" 00:06:20.616 }, 00:06:20.616 "thread": { 00:06:20.616 "mask": "0x400", 00:06:20.616 "tpoint_mask": "0x0" 00:06:20.616 }, 00:06:20.616 "nvme_pcie": { 00:06:20.616 "mask": "0x800", 00:06:20.616 "tpoint_mask": "0x0" 00:06:20.616 }, 00:06:20.616 "iaa": { 00:06:20.616 "mask": "0x1000", 00:06:20.616 "tpoint_mask": "0x0" 00:06:20.616 }, 00:06:20.616 "nvme_tcp": { 00:06:20.616 "mask": "0x2000", 00:06:20.616 "tpoint_mask": "0x0" 00:06:20.616 }, 00:06:20.616 "bdev_nvme": { 00:06:20.616 "mask": "0x4000", 00:06:20.616 "tpoint_mask": "0x0" 00:06:20.616 }, 00:06:20.616 "sock": { 00:06:20.616 "mask": "0x8000", 00:06:20.616 "tpoint_mask": "0x0" 00:06:20.616 } 00:06:20.616 }' 00:06:20.616 10:13:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:20.616 10:13:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:06:20.616 10:13:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:20.886 10:13:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:20.886 10:13:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:20.886 10:13:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:20.886 10:13:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:20.886 10:13:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:20.886 10:13:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:20.886 10:13:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:20.886 
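The trace checks above need only a single RPC: trace_get_info returns the shared-memory path plus one entry per tracepoint group, and the test asserts that the bdev group is enabled. A small sketch of the same query, assuming the target was started with a bdev tracepoint group switched on (e.g. via the -e/--tpoint-group option) and that scripts/rpc.py is available:

    rpc=./scripts/rpc.py

    info=$($rpc trace_get_info)
    echo "$info" | jq -r .tpoint_shm_path      # /dev/shm/spdk_tgt_trace.pid<pid> of the running target
    echo "$info" | jq -r .tpoint_group_mask    # 0x8 here, i.e. the bdev group
    echo "$info" | jq -r .bdev.tpoint_mask     # non-zero once the bdev tracepoints are active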
00:06:20.886 real 0m0.244s 00:06:20.886 user 0m0.205s 00:06:20.886 sys 0m0.030s 00:06:20.886 10:13:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.886 10:13:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:20.886 ************************************ 00:06:20.886 END TEST rpc_trace_cmd_test 00:06:20.886 ************************************ 00:06:20.886 10:13:57 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:20.886 10:13:57 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:20.886 10:13:57 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:20.886 10:13:57 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:20.886 10:13:57 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:20.886 10:13:57 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.886 10:13:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.886 ************************************ 00:06:20.886 START TEST rpc_daemon_integrity 00:06:20.886 ************************************ 00:06:20.886 10:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:06:20.886 10:13:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:20.886 10:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.886 10:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:20.886 10:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.886 10:13:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:20.886 10:13:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:21.156 10:13:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:21.156 10:13:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:21.156 10:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.156 10:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:21.156 10:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.156 10:13:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:21.156 10:13:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:21.156 10:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.156 10:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:21.156 10:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.156 10:13:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:21.156 { 00:06:21.156 "name": "Malloc2", 00:06:21.156 "aliases": [ 00:06:21.156 "9a003ef8-456b-4172-867d-763dcee07c9e" 00:06:21.156 ], 00:06:21.156 "product_name": "Malloc disk", 00:06:21.156 "block_size": 512, 00:06:21.156 "num_blocks": 16384, 00:06:21.156 "uuid": "9a003ef8-456b-4172-867d-763dcee07c9e", 00:06:21.156 "assigned_rate_limits": { 00:06:21.156 "rw_ios_per_sec": 0, 00:06:21.156 "rw_mbytes_per_sec": 0, 00:06:21.156 "r_mbytes_per_sec": 0, 00:06:21.156 "w_mbytes_per_sec": 0 00:06:21.156 }, 00:06:21.156 "claimed": false, 00:06:21.156 "zoned": false, 00:06:21.156 "supported_io_types": { 00:06:21.156 "read": true, 00:06:21.156 "write": true, 00:06:21.156 "unmap": true, 00:06:21.156 "flush": true, 00:06:21.156 "reset": true, 00:06:21.156 "nvme_admin": false, 00:06:21.156 "nvme_io": false, 00:06:21.156 
"nvme_io_md": false, 00:06:21.156 "write_zeroes": true, 00:06:21.156 "zcopy": true, 00:06:21.156 "get_zone_info": false, 00:06:21.156 "zone_management": false, 00:06:21.156 "zone_append": false, 00:06:21.156 "compare": false, 00:06:21.156 "compare_and_write": false, 00:06:21.156 "abort": true, 00:06:21.156 "seek_hole": false, 00:06:21.156 "seek_data": false, 00:06:21.156 "copy": true, 00:06:21.156 "nvme_iov_md": false 00:06:21.156 }, 00:06:21.156 "memory_domains": [ 00:06:21.156 { 00:06:21.156 "dma_device_id": "system", 00:06:21.156 "dma_device_type": 1 00:06:21.156 }, 00:06:21.156 { 00:06:21.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:21.156 "dma_device_type": 2 00:06:21.156 } 00:06:21.156 ], 00:06:21.156 "driver_specific": {} 00:06:21.156 } 00:06:21.156 ]' 00:06:21.156 10:13:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:21.156 10:13:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:21.156 10:13:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:21.156 10:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.156 10:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:21.156 [2024-07-15 10:13:58.169892] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:21.156 [2024-07-15 10:13:58.169920] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:21.156 [2024-07-15 10:13:58.169936] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2018f70 00:06:21.156 [2024-07-15 10:13:58.169943] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:21.156 [2024-07-15 10:13:58.171143] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:21.156 [2024-07-15 10:13:58.171162] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:21.156 Passthru0 00:06:21.156 10:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.156 10:13:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:21.156 10:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.156 10:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:21.156 10:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.156 10:13:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:21.156 { 00:06:21.156 "name": "Malloc2", 00:06:21.156 "aliases": [ 00:06:21.156 "9a003ef8-456b-4172-867d-763dcee07c9e" 00:06:21.156 ], 00:06:21.156 "product_name": "Malloc disk", 00:06:21.156 "block_size": 512, 00:06:21.156 "num_blocks": 16384, 00:06:21.156 "uuid": "9a003ef8-456b-4172-867d-763dcee07c9e", 00:06:21.156 "assigned_rate_limits": { 00:06:21.156 "rw_ios_per_sec": 0, 00:06:21.156 "rw_mbytes_per_sec": 0, 00:06:21.156 "r_mbytes_per_sec": 0, 00:06:21.156 "w_mbytes_per_sec": 0 00:06:21.156 }, 00:06:21.156 "claimed": true, 00:06:21.156 "claim_type": "exclusive_write", 00:06:21.156 "zoned": false, 00:06:21.156 "supported_io_types": { 00:06:21.156 "read": true, 00:06:21.156 "write": true, 00:06:21.156 "unmap": true, 00:06:21.156 "flush": true, 00:06:21.156 "reset": true, 00:06:21.156 "nvme_admin": false, 00:06:21.156 "nvme_io": false, 00:06:21.156 "nvme_io_md": false, 00:06:21.156 "write_zeroes": true, 00:06:21.156 "zcopy": true, 00:06:21.156 "get_zone_info": false, 
00:06:21.156 "zone_management": false, 00:06:21.156 "zone_append": false, 00:06:21.156 "compare": false, 00:06:21.156 "compare_and_write": false, 00:06:21.156 "abort": true, 00:06:21.156 "seek_hole": false, 00:06:21.156 "seek_data": false, 00:06:21.156 "copy": true, 00:06:21.156 "nvme_iov_md": false 00:06:21.156 }, 00:06:21.156 "memory_domains": [ 00:06:21.156 { 00:06:21.156 "dma_device_id": "system", 00:06:21.156 "dma_device_type": 1 00:06:21.156 }, 00:06:21.156 { 00:06:21.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:21.156 "dma_device_type": 2 00:06:21.156 } 00:06:21.156 ], 00:06:21.156 "driver_specific": {} 00:06:21.156 }, 00:06:21.156 { 00:06:21.156 "name": "Passthru0", 00:06:21.156 "aliases": [ 00:06:21.156 "0272dd57-b43a-5f9d-8086-2827676634d9" 00:06:21.156 ], 00:06:21.156 "product_name": "passthru", 00:06:21.156 "block_size": 512, 00:06:21.156 "num_blocks": 16384, 00:06:21.156 "uuid": "0272dd57-b43a-5f9d-8086-2827676634d9", 00:06:21.156 "assigned_rate_limits": { 00:06:21.156 "rw_ios_per_sec": 0, 00:06:21.156 "rw_mbytes_per_sec": 0, 00:06:21.156 "r_mbytes_per_sec": 0, 00:06:21.156 "w_mbytes_per_sec": 0 00:06:21.156 }, 00:06:21.156 "claimed": false, 00:06:21.156 "zoned": false, 00:06:21.156 "supported_io_types": { 00:06:21.156 "read": true, 00:06:21.156 "write": true, 00:06:21.156 "unmap": true, 00:06:21.156 "flush": true, 00:06:21.156 "reset": true, 00:06:21.156 "nvme_admin": false, 00:06:21.156 "nvme_io": false, 00:06:21.156 "nvme_io_md": false, 00:06:21.156 "write_zeroes": true, 00:06:21.156 "zcopy": true, 00:06:21.156 "get_zone_info": false, 00:06:21.156 "zone_management": false, 00:06:21.156 "zone_append": false, 00:06:21.156 "compare": false, 00:06:21.156 "compare_and_write": false, 00:06:21.156 "abort": true, 00:06:21.156 "seek_hole": false, 00:06:21.156 "seek_data": false, 00:06:21.156 "copy": true, 00:06:21.156 "nvme_iov_md": false 00:06:21.156 }, 00:06:21.156 "memory_domains": [ 00:06:21.156 { 00:06:21.156 "dma_device_id": "system", 00:06:21.156 "dma_device_type": 1 00:06:21.156 }, 00:06:21.156 { 00:06:21.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:21.156 "dma_device_type": 2 00:06:21.156 } 00:06:21.156 ], 00:06:21.156 "driver_specific": { 00:06:21.156 "passthru": { 00:06:21.156 "name": "Passthru0", 00:06:21.156 "base_bdev_name": "Malloc2" 00:06:21.156 } 00:06:21.156 } 00:06:21.156 } 00:06:21.156 ]' 00:06:21.156 10:13:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:21.156 10:13:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:21.156 10:13:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:21.156 10:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.156 10:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:21.156 10:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.157 10:13:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:21.157 10:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.157 10:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:21.157 10:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.157 10:13:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:21.157 10:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:21.157 10:13:58 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:21.157 10:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:21.157 10:13:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:21.157 10:13:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:21.157 10:13:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:21.157 00:06:21.157 real 0m0.292s 00:06:21.157 user 0m0.186s 00:06:21.157 sys 0m0.041s 00:06:21.157 10:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.157 10:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:21.157 ************************************ 00:06:21.157 END TEST rpc_daemon_integrity 00:06:21.157 ************************************ 00:06:21.418 10:13:58 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:21.418 10:13:58 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:21.418 10:13:58 rpc -- rpc/rpc.sh@84 -- # killprocess 2722434 00:06:21.418 10:13:58 rpc -- common/autotest_common.sh@948 -- # '[' -z 2722434 ']' 00:06:21.418 10:13:58 rpc -- common/autotest_common.sh@952 -- # kill -0 2722434 00:06:21.419 10:13:58 rpc -- common/autotest_common.sh@953 -- # uname 00:06:21.419 10:13:58 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:21.419 10:13:58 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2722434 00:06:21.419 10:13:58 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:21.419 10:13:58 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:21.419 10:13:58 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2722434' 00:06:21.419 killing process with pid 2722434 00:06:21.419 10:13:58 rpc -- common/autotest_common.sh@967 -- # kill 2722434 00:06:21.419 10:13:58 rpc -- common/autotest_common.sh@972 -- # wait 2722434 00:06:21.679 00:06:21.679 real 0m2.438s 00:06:21.679 user 0m3.196s 00:06:21.679 sys 0m0.689s 00:06:21.679 10:13:58 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.679 10:13:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.679 ************************************ 00:06:21.679 END TEST rpc 00:06:21.679 ************************************ 00:06:21.679 10:13:58 -- common/autotest_common.sh@1142 -- # return 0 00:06:21.679 10:13:58 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:21.679 10:13:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:21.679 10:13:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.679 10:13:58 -- common/autotest_common.sh@10 -- # set +x 00:06:21.679 ************************************ 00:06:21.679 START TEST skip_rpc 00:06:21.679 ************************************ 00:06:21.679 10:13:58 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:21.679 * Looking for test storage... 
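One detail worth pulling out of the bdev_get_bdevs dumps in the two integrity passes: registering the passthru flips the base Malloc bdev to "claimed": true with "claim_type": "exclusive_write", and deleting the passthru releases the claim again. While the passthru exists, that state can be read back directly; a sketch, again assuming scripts/rpc.py against the default socket:

    ./scripts/rpc.py bdev_get_bdevs | \
        jq -r '.[] | "\(.name)\tclaimed=\(.claimed)\tclaim_type=\(.claim_type // "-")"'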
00:06:21.679 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:06:21.679 10:13:58 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:06:21.679 10:13:58 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:06:21.679 10:13:58 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:21.679 10:13:58 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:21.679 10:13:58 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.679 10:13:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.679 ************************************ 00:06:21.679 START TEST skip_rpc 00:06:21.679 ************************************ 00:06:21.679 10:13:58 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:06:21.679 10:13:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2722979 00:06:21.679 10:13:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:21.679 10:13:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:21.679 10:13:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:21.950 [2024-07-15 10:13:58.892093] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:21.950 [2024-07-15 10:13:58.892156] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2722979 ] 00:06:21.950 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.950 [2024-07-15 10:13:58.962884] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.950 [2024-07-15 10:13:59.039507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.299 10:14:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:27.299 10:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:27.299 10:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:27.299 10:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:27.299 10:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:27.299 10:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:27.299 10:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:27.299 10:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:06:27.299 10:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.299 10:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.299 10:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:27.299 10:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:27.299 10:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:27.299 10:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:27.299 10:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:27.299 10:14:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 
-- # trap - SIGINT SIGTERM EXIT 00:06:27.299 10:14:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2722979 00:06:27.299 10:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 2722979 ']' 00:06:27.299 10:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 2722979 00:06:27.299 10:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:06:27.299 10:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:27.299 10:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2722979 00:06:27.299 10:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:27.299 10:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:27.299 10:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2722979' 00:06:27.299 killing process with pid 2722979 00:06:27.299 10:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 2722979 00:06:27.299 10:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 2722979 00:06:27.299 00:06:27.299 real 0m5.280s 00:06:27.299 user 0m5.073s 00:06:27.299 sys 0m0.236s 00:06:27.299 10:14:04 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.299 10:14:04 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.299 ************************************ 00:06:27.299 END TEST skip_rpc 00:06:27.299 ************************************ 00:06:27.299 10:14:04 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:27.299 10:14:04 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:27.299 10:14:04 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:27.299 10:14:04 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.299 10:14:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.299 ************************************ 00:06:27.299 START TEST skip_rpc_with_json 00:06:27.299 ************************************ 00:06:27.299 10:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:06:27.299 10:14:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:27.299 10:14:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2724185 00:06:27.299 10:14:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:27.299 10:14:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2724185 00:06:27.299 10:14:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:27.299 10:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 2724185 ']' 00:06:27.299 10:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.299 10:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:27.299 10:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
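The skip_rpc case that just finished is the inverse check: with --no-rpc-server the target never opens /var/tmp/spdk.sock, so any RPC call has to fail, and the NOT wrapper above asserts exactly that. A hand-run sketch of the same behaviour, with the binary and script paths laid out as in an SPDK checkout and scripts/rpc.py standing in for rpc_cmd:

    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    tgt_pid=$!
    sleep 5                                    # same grace period the test uses

    if ./scripts/rpc.py spdk_get_version; then
        echo "unexpected: the RPC server answered" >&2
    else
        echo "expected failure: no RPC server is listening"
    fi

    kill "$tgt_pid"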
00:06:27.299 10:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:27.299 10:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:27.299 [2024-07-15 10:14:04.246534] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:27.299 [2024-07-15 10:14:04.246588] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2724185 ] 00:06:27.299 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.299 [2024-07-15 10:14:04.315552] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.299 [2024-07-15 10:14:04.387772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.560 10:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:27.560 10:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:06:27.560 10:14:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:27.560 10:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.560 10:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:27.560 [2024-07-15 10:14:04.562693] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:27.560 request: 00:06:27.560 { 00:06:27.560 "trtype": "tcp", 00:06:27.560 "method": "nvmf_get_transports", 00:06:27.560 "req_id": 1 00:06:27.560 } 00:06:27.560 Got JSON-RPC error response 00:06:27.560 response: 00:06:27.560 { 00:06:27.560 "code": -19, 00:06:27.560 "message": "No such device" 00:06:27.560 } 00:06:27.560 10:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:27.560 10:14:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:27.560 10:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.560 10:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:27.560 [2024-07-15 10:14:04.574817] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:27.560 10:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.560 10:14:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:27.560 10:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.560 10:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:27.560 10:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.560 10:14:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:06:27.560 { 00:06:27.560 "subsystems": [ 00:06:27.560 { 00:06:27.560 "subsystem": "keyring", 00:06:27.560 "config": [] 00:06:27.560 }, 00:06:27.560 { 00:06:27.560 "subsystem": "iobuf", 00:06:27.560 "config": [ 00:06:27.560 { 00:06:27.560 "method": "iobuf_set_options", 00:06:27.560 "params": { 00:06:27.560 "small_pool_count": 8192, 00:06:27.560 "large_pool_count": 1024, 00:06:27.560 "small_bufsize": 8192, 00:06:27.560 "large_bufsize": 135168 00:06:27.560 } 00:06:27.560 } 00:06:27.560 ] 00:06:27.560 }, 00:06:27.560 { 00:06:27.560 "subsystem": 
"sock", 00:06:27.560 "config": [ 00:06:27.560 { 00:06:27.560 "method": "sock_set_default_impl", 00:06:27.560 "params": { 00:06:27.560 "impl_name": "posix" 00:06:27.560 } 00:06:27.560 }, 00:06:27.560 { 00:06:27.560 "method": "sock_impl_set_options", 00:06:27.560 "params": { 00:06:27.560 "impl_name": "ssl", 00:06:27.560 "recv_buf_size": 4096, 00:06:27.560 "send_buf_size": 4096, 00:06:27.560 "enable_recv_pipe": true, 00:06:27.560 "enable_quickack": false, 00:06:27.560 "enable_placement_id": 0, 00:06:27.560 "enable_zerocopy_send_server": true, 00:06:27.560 "enable_zerocopy_send_client": false, 00:06:27.560 "zerocopy_threshold": 0, 00:06:27.560 "tls_version": 0, 00:06:27.560 "enable_ktls": false 00:06:27.560 } 00:06:27.560 }, 00:06:27.560 { 00:06:27.560 "method": "sock_impl_set_options", 00:06:27.560 "params": { 00:06:27.560 "impl_name": "posix", 00:06:27.560 "recv_buf_size": 2097152, 00:06:27.560 "send_buf_size": 2097152, 00:06:27.560 "enable_recv_pipe": true, 00:06:27.560 "enable_quickack": false, 00:06:27.560 "enable_placement_id": 0, 00:06:27.560 "enable_zerocopy_send_server": true, 00:06:27.560 "enable_zerocopy_send_client": false, 00:06:27.560 "zerocopy_threshold": 0, 00:06:27.560 "tls_version": 0, 00:06:27.560 "enable_ktls": false 00:06:27.560 } 00:06:27.560 } 00:06:27.560 ] 00:06:27.560 }, 00:06:27.560 { 00:06:27.560 "subsystem": "vmd", 00:06:27.560 "config": [] 00:06:27.560 }, 00:06:27.560 { 00:06:27.560 "subsystem": "accel", 00:06:27.560 "config": [ 00:06:27.560 { 00:06:27.560 "method": "accel_set_options", 00:06:27.560 "params": { 00:06:27.560 "small_cache_size": 128, 00:06:27.560 "large_cache_size": 16, 00:06:27.560 "task_count": 2048, 00:06:27.560 "sequence_count": 2048, 00:06:27.560 "buf_count": 2048 00:06:27.560 } 00:06:27.560 } 00:06:27.560 ] 00:06:27.560 }, 00:06:27.560 { 00:06:27.560 "subsystem": "bdev", 00:06:27.560 "config": [ 00:06:27.560 { 00:06:27.560 "method": "bdev_set_options", 00:06:27.560 "params": { 00:06:27.560 "bdev_io_pool_size": 65535, 00:06:27.560 "bdev_io_cache_size": 256, 00:06:27.560 "bdev_auto_examine": true, 00:06:27.560 "iobuf_small_cache_size": 128, 00:06:27.560 "iobuf_large_cache_size": 16 00:06:27.560 } 00:06:27.560 }, 00:06:27.560 { 00:06:27.560 "method": "bdev_raid_set_options", 00:06:27.560 "params": { 00:06:27.560 "process_window_size_kb": 1024 00:06:27.560 } 00:06:27.560 }, 00:06:27.560 { 00:06:27.560 "method": "bdev_iscsi_set_options", 00:06:27.560 "params": { 00:06:27.560 "timeout_sec": 30 00:06:27.560 } 00:06:27.560 }, 00:06:27.560 { 00:06:27.560 "method": "bdev_nvme_set_options", 00:06:27.560 "params": { 00:06:27.560 "action_on_timeout": "none", 00:06:27.560 "timeout_us": 0, 00:06:27.560 "timeout_admin_us": 0, 00:06:27.560 "keep_alive_timeout_ms": 10000, 00:06:27.560 "arbitration_burst": 0, 00:06:27.560 "low_priority_weight": 0, 00:06:27.560 "medium_priority_weight": 0, 00:06:27.560 "high_priority_weight": 0, 00:06:27.560 "nvme_adminq_poll_period_us": 10000, 00:06:27.560 "nvme_ioq_poll_period_us": 0, 00:06:27.560 "io_queue_requests": 0, 00:06:27.560 "delay_cmd_submit": true, 00:06:27.560 "transport_retry_count": 4, 00:06:27.560 "bdev_retry_count": 3, 00:06:27.560 "transport_ack_timeout": 0, 00:06:27.560 "ctrlr_loss_timeout_sec": 0, 00:06:27.560 "reconnect_delay_sec": 0, 00:06:27.560 "fast_io_fail_timeout_sec": 0, 00:06:27.560 "disable_auto_failback": false, 00:06:27.560 "generate_uuids": false, 00:06:27.560 "transport_tos": 0, 00:06:27.560 "nvme_error_stat": false, 00:06:27.560 "rdma_srq_size": 0, 00:06:27.561 "io_path_stat": false, 
00:06:27.561 "allow_accel_sequence": false, 00:06:27.561 "rdma_max_cq_size": 0, 00:06:27.561 "rdma_cm_event_timeout_ms": 0, 00:06:27.561 "dhchap_digests": [ 00:06:27.561 "sha256", 00:06:27.561 "sha384", 00:06:27.561 "sha512" 00:06:27.561 ], 00:06:27.561 "dhchap_dhgroups": [ 00:06:27.561 "null", 00:06:27.561 "ffdhe2048", 00:06:27.561 "ffdhe3072", 00:06:27.561 "ffdhe4096", 00:06:27.561 "ffdhe6144", 00:06:27.561 "ffdhe8192" 00:06:27.561 ] 00:06:27.561 } 00:06:27.561 }, 00:06:27.561 { 00:06:27.561 "method": "bdev_nvme_set_hotplug", 00:06:27.561 "params": { 00:06:27.561 "period_us": 100000, 00:06:27.561 "enable": false 00:06:27.561 } 00:06:27.561 }, 00:06:27.561 { 00:06:27.561 "method": "bdev_wait_for_examine" 00:06:27.561 } 00:06:27.561 ] 00:06:27.561 }, 00:06:27.561 { 00:06:27.561 "subsystem": "scsi", 00:06:27.561 "config": null 00:06:27.561 }, 00:06:27.561 { 00:06:27.561 "subsystem": "scheduler", 00:06:27.561 "config": [ 00:06:27.561 { 00:06:27.561 "method": "framework_set_scheduler", 00:06:27.561 "params": { 00:06:27.561 "name": "static" 00:06:27.561 } 00:06:27.561 } 00:06:27.561 ] 00:06:27.561 }, 00:06:27.561 { 00:06:27.561 "subsystem": "vhost_scsi", 00:06:27.561 "config": [] 00:06:27.561 }, 00:06:27.561 { 00:06:27.561 "subsystem": "vhost_blk", 00:06:27.561 "config": [] 00:06:27.561 }, 00:06:27.561 { 00:06:27.561 "subsystem": "ublk", 00:06:27.561 "config": [] 00:06:27.561 }, 00:06:27.561 { 00:06:27.561 "subsystem": "nbd", 00:06:27.561 "config": [] 00:06:27.561 }, 00:06:27.561 { 00:06:27.561 "subsystem": "nvmf", 00:06:27.561 "config": [ 00:06:27.561 { 00:06:27.561 "method": "nvmf_set_config", 00:06:27.561 "params": { 00:06:27.561 "discovery_filter": "match_any", 00:06:27.561 "admin_cmd_passthru": { 00:06:27.561 "identify_ctrlr": false 00:06:27.561 } 00:06:27.561 } 00:06:27.561 }, 00:06:27.561 { 00:06:27.561 "method": "nvmf_set_max_subsystems", 00:06:27.561 "params": { 00:06:27.561 "max_subsystems": 1024 00:06:27.561 } 00:06:27.561 }, 00:06:27.561 { 00:06:27.561 "method": "nvmf_set_crdt", 00:06:27.561 "params": { 00:06:27.561 "crdt1": 0, 00:06:27.561 "crdt2": 0, 00:06:27.561 "crdt3": 0 00:06:27.561 } 00:06:27.561 }, 00:06:27.561 { 00:06:27.561 "method": "nvmf_create_transport", 00:06:27.561 "params": { 00:06:27.561 "trtype": "TCP", 00:06:27.561 "max_queue_depth": 128, 00:06:27.561 "max_io_qpairs_per_ctrlr": 127, 00:06:27.561 "in_capsule_data_size": 4096, 00:06:27.561 "max_io_size": 131072, 00:06:27.561 "io_unit_size": 131072, 00:06:27.561 "max_aq_depth": 128, 00:06:27.561 "num_shared_buffers": 511, 00:06:27.561 "buf_cache_size": 4294967295, 00:06:27.561 "dif_insert_or_strip": false, 00:06:27.561 "zcopy": false, 00:06:27.561 "c2h_success": true, 00:06:27.561 "sock_priority": 0, 00:06:27.561 "abort_timeout_sec": 1, 00:06:27.561 "ack_timeout": 0, 00:06:27.561 "data_wr_pool_size": 0 00:06:27.561 } 00:06:27.561 } 00:06:27.561 ] 00:06:27.561 }, 00:06:27.561 { 00:06:27.561 "subsystem": "iscsi", 00:06:27.561 "config": [ 00:06:27.561 { 00:06:27.561 "method": "iscsi_set_options", 00:06:27.561 "params": { 00:06:27.561 "node_base": "iqn.2016-06.io.spdk", 00:06:27.561 "max_sessions": 128, 00:06:27.561 "max_connections_per_session": 2, 00:06:27.561 "max_queue_depth": 64, 00:06:27.561 "default_time2wait": 2, 00:06:27.561 "default_time2retain": 20, 00:06:27.561 "first_burst_length": 8192, 00:06:27.561 "immediate_data": true, 00:06:27.561 "allow_duplicated_isid": false, 00:06:27.561 "error_recovery_level": 0, 00:06:27.561 "nop_timeout": 60, 00:06:27.561 "nop_in_interval": 30, 00:06:27.561 "disable_chap": 
false, 00:06:27.561 "require_chap": false, 00:06:27.561 "mutual_chap": false, 00:06:27.561 "chap_group": 0, 00:06:27.561 "max_large_datain_per_connection": 64, 00:06:27.561 "max_r2t_per_connection": 4, 00:06:27.561 "pdu_pool_size": 36864, 00:06:27.561 "immediate_data_pool_size": 16384, 00:06:27.561 "data_out_pool_size": 2048 00:06:27.561 } 00:06:27.561 } 00:06:27.561 ] 00:06:27.561 } 00:06:27.561 ] 00:06:27.561 } 00:06:27.561 10:14:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:27.561 10:14:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2724185 00:06:27.561 10:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 2724185 ']' 00:06:27.561 10:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 2724185 00:06:27.561 10:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:06:27.561 10:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:27.561 10:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2724185 00:06:27.822 10:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:27.822 10:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:27.822 10:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2724185' 00:06:27.822 killing process with pid 2724185 00:06:27.822 10:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 2724185 00:06:27.822 10:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 2724185 00:06:27.822 10:14:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2724330 00:06:27.822 10:14:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:27.822 10:14:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:06:33.101 10:14:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2724330 00:06:33.101 10:14:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 2724330 ']' 00:06:33.101 10:14:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 2724330 00:06:33.101 10:14:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:06:33.101 10:14:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:33.101 10:14:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2724330 00:06:33.101 10:14:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:33.101 10:14:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:33.101 10:14:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2724330' 00:06:33.101 killing process with pid 2724330 00:06:33.101 10:14:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 2724330 00:06:33.101 10:14:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 2724330 00:06:33.101 10:14:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:06:33.101 10:14:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:06:33.101 00:06:33.101 real 0m6.081s 00:06:33.101 user 0m5.919s 00:06:33.101 sys 0m0.504s 00:06:33.101 10:14:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.101 10:14:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:33.101 ************************************ 00:06:33.101 END TEST skip_rpc_with_json 00:06:33.101 ************************************ 00:06:33.361 10:14:10 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:33.361 10:14:10 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:33.361 10:14:10 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:33.361 10:14:10 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.361 10:14:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.361 ************************************ 00:06:33.361 START TEST skip_rpc_with_delay 00:06:33.361 ************************************ 00:06:33.361 10:14:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:06:33.361 10:14:10 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:33.361 10:14:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:06:33.361 10:14:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:33.361 10:14:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:33.361 10:14:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:33.361 10:14:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:33.361 10:14:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:33.361 10:14:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:33.361 10:14:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:33.361 10:14:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:33.361 10:14:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:33.361 10:14:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:33.361 [2024-07-15 10:14:10.416632] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
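The skip_rpc_with_json pass that ends above is a save/restore cycle: configure the TCP transport over RPC, snapshot everything with save_config, restart the target driven only by that JSON, and grep its log for the transport-init notice. A compact sketch of the same cycle, with the /tmp file locations chosen purely for illustration:

    rpc=./scripts/rpc.py
    cfg=/tmp/config.json

    $rpc nvmf_create_transport -t tcp          # the one piece of live configuration in this test
    $rpc save_config > "$cfg"                  # dump the current settings of every subsystem as JSON
    # stop the first target, then boot a fresh one purely from the saved file:
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json "$cfg" &> /tmp/tgt.log &
    sleep 5
    grep -q 'TCP Transport Init' /tmp/tgt.log && echo "transport restored from JSON"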
00:06:33.361 [2024-07-15 10:14:10.416704] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:33.361 10:14:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:06:33.361 10:14:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:33.361 10:14:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:33.361 10:14:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:33.361 00:06:33.361 real 0m0.079s 00:06:33.361 user 0m0.053s 00:06:33.361 sys 0m0.025s 00:06:33.361 10:14:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.361 10:14:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:33.361 ************************************ 00:06:33.361 END TEST skip_rpc_with_delay 00:06:33.361 ************************************ 00:06:33.361 10:14:10 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:33.361 10:14:10 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:33.361 10:14:10 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:33.361 10:14:10 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:33.361 10:14:10 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:33.361 10:14:10 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.361 10:14:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.361 ************************************ 00:06:33.361 START TEST exit_on_failed_rpc_init 00:06:33.361 ************************************ 00:06:33.361 10:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:06:33.361 10:14:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2725402 00:06:33.361 10:14:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2725402 00:06:33.361 10:14:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:33.361 10:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 2725402 ']' 00:06:33.361 10:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.361 10:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:33.361 10:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.361 10:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:33.361 10:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:33.622 [2024-07-15 10:14:10.561765] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
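The skip_rpc_with_delay error above is the flip side of --wait-for-rpc: the flag postpones subsystem initialization until an RPC says otherwise, so it is rejected when no RPC server will ever be started. The intended pairing looks roughly like the sketch below; the framework_start_init and framework_wait_init calls are not exercised in this log but are the standard SPDK framework RPCs for finishing deferred init, and the paths are illustrative:

    ./build/bin/spdk_tgt --wait-for-rpc -m 0x1 &
    sleep 5
    ./scripts/rpc.py framework_start_init      # let the application finish subsystem init
    ./scripts/rpc.py framework_wait_init       # returns once initialization has completed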
00:06:33.622 [2024-07-15 10:14:10.561817] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2725402 ] 00:06:33.622 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.622 [2024-07-15 10:14:10.630807] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.622 [2024-07-15 10:14:10.704446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.192 10:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:34.193 10:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:06:34.193 10:14:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:34.193 10:14:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:34.193 10:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:06:34.193 10:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:34.193 10:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:34.193 10:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:34.193 10:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:34.193 10:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:34.193 10:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:34.193 10:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:34.193 10:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:34.193 10:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:34.193 10:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:34.193 [2024-07-15 10:14:11.360087] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:34.193 [2024-07-15 10:14:11.360140] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2725727 ] 00:06:34.193 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.453 [2024-07-15 10:14:11.442457] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.453 [2024-07-15 10:14:11.506998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.453 [2024-07-15 10:14:11.507060] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. 
Specify another. 00:06:34.453 [2024-07-15 10:14:11.507069] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:34.453 [2024-07-15 10:14:11.507076] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:34.453 10:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:06:34.453 10:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:34.453 10:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:06:34.453 10:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:06:34.453 10:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:06:34.453 10:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:34.453 10:14:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:34.453 10:14:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2725402 00:06:34.453 10:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 2725402 ']' 00:06:34.453 10:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 2725402 00:06:34.453 10:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:06:34.453 10:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:34.453 10:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2725402 00:06:34.453 10:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:34.453 10:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:34.453 10:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2725402' 00:06:34.453 killing process with pid 2725402 00:06:34.453 10:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 2725402 00:06:34.453 10:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 2725402 00:06:34.712 00:06:34.712 real 0m1.318s 00:06:34.712 user 0m1.526s 00:06:34.712 sys 0m0.366s 00:06:34.712 10:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:34.712 10:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:34.712 ************************************ 00:06:34.712 END TEST exit_on_failed_rpc_init 00:06:34.712 ************************************ 00:06:34.712 10:14:11 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:34.712 10:14:11 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:06:34.712 00:06:34.712 real 0m13.160s 00:06:34.712 user 0m12.705s 00:06:34.712 sys 0m1.424s 00:06:34.712 10:14:11 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:34.712 10:14:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.712 ************************************ 00:06:34.712 END TEST skip_rpc 00:06:34.713 ************************************ 00:06:34.713 10:14:11 -- common/autotest_common.sh@1142 -- # return 0 00:06:34.713 10:14:11 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:34.713 10:14:11 
-- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:34.713 10:14:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.713 10:14:11 -- common/autotest_common.sh@10 -- # set +x 00:06:34.972 ************************************ 00:06:34.972 START TEST rpc_client 00:06:34.972 ************************************ 00:06:34.972 10:14:11 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:34.972 * Looking for test storage... 00:06:34.972 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client 00:06:34.972 10:14:12 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:34.972 OK 00:06:34.972 10:14:12 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:34.972 00:06:34.972 real 0m0.125s 00:06:34.972 user 0m0.058s 00:06:34.972 sys 0m0.075s 00:06:34.972 10:14:12 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:34.972 10:14:12 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:34.972 ************************************ 00:06:34.972 END TEST rpc_client 00:06:34.972 ************************************ 00:06:34.972 10:14:12 -- common/autotest_common.sh@1142 -- # return 0 00:06:34.972 10:14:12 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:06:34.972 10:14:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:34.972 10:14:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.972 10:14:12 -- common/autotest_common.sh@10 -- # set +x 00:06:34.972 ************************************ 00:06:34.972 START TEST json_config 00:06:34.972 ************************************ 00:06:34.972 10:14:12 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:06:35.258 10:14:12 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:35.258 10:14:12 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:35.258 10:14:12 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:35.258 10:14:12 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:35.258 10:14:12 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:35.258 10:14:12 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:35.258 10:14:12 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:35.258 10:14:12 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:35.258 10:14:12 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:35.258 10:14:12 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:35.258 10:14:12 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:35.258 10:14:12 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:35.258 10:14:12 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:35.258 10:14:12 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:35.258 10:14:12 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:35.258 10:14:12 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:35.258 10:14:12 json_config -- nvmf/common.sh@21 
-- # NET_TYPE=phy-fallback 00:06:35.258 10:14:12 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:35.258 10:14:12 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:35.258 10:14:12 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:35.258 10:14:12 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:35.258 10:14:12 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:35.258 10:14:12 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.258 10:14:12 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.258 10:14:12 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.258 10:14:12 json_config -- paths/export.sh@5 -- # export PATH 00:06:35.258 10:14:12 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.258 10:14:12 json_config -- nvmf/common.sh@47 -- # : 0 00:06:35.258 10:14:12 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:35.258 10:14:12 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:35.258 10:14:12 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:35.258 10:14:12 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:35.258 10:14:12 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:35.258 10:14:12 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:35.258 10:14:12 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:35.258 10:14:12 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:35.258 10:14:12 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:06:35.258 10:14:12 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:35.258 10:14:12 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:35.258 10:14:12 json_config -- 
json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:35.258 10:14:12 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:35.258 10:14:12 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:35.258 10:14:12 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:35.258 10:14:12 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:35.258 10:14:12 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:35.258 10:14:12 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:35.258 10:14:12 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:35.258 10:14:12 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json') 00:06:35.258 10:14:12 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:35.258 10:14:12 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:35.258 10:14:12 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:35.258 10:14:12 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:06:35.258 INFO: JSON configuration test init 00:06:35.258 10:14:12 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:06:35.258 10:14:12 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:06:35.258 10:14:12 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:35.258 10:14:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:35.258 10:14:12 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:06:35.258 10:14:12 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:35.258 10:14:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:35.258 10:14:12 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:06:35.258 10:14:12 json_config -- json_config/common.sh@9 -- # local app=target 00:06:35.258 10:14:12 json_config -- json_config/common.sh@10 -- # shift 00:06:35.258 10:14:12 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:35.258 10:14:12 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:35.258 10:14:12 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:35.258 10:14:12 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:35.258 10:14:12 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:35.258 10:14:12 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2725869 00:06:35.258 10:14:12 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:35.258 Waiting for target to run... 
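Editor's note for readers following the trace: the json_config suite first boots an SPDK target with --wait-for-rpc and blocks until the RPC socket answers before doing anything else. A minimal stand-alone sketch of that step follows; the SPDK variable and the rpc_get_methods probe are illustrative assumptions, not lines from the harness itself.

# Sketch only: start spdk_tgt the way the trace does and wait for its RPC socket.
SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk   # checkout path taken from the log
"$SPDK"/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
tgt_pid=$!
# Poll until the target responds; rpc_get_methods is available even before framework init.
until "$SPDK"/scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2
done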
00:06:35.258 10:14:12 json_config -- json_config/common.sh@25 -- # waitforlisten 2725869 /var/tmp/spdk_tgt.sock 00:06:35.258 10:14:12 json_config -- common/autotest_common.sh@829 -- # '[' -z 2725869 ']' 00:06:35.258 10:14:12 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:35.258 10:14:12 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:35.258 10:14:12 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:35.258 10:14:12 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:35.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:35.258 10:14:12 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:35.258 10:14:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:35.258 [2024-07-15 10:14:12.313429] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:35.258 [2024-07-15 10:14:12.313501] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2725869 ] 00:06:35.258 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.518 [2024-07-15 10:14:12.585871] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.518 [2024-07-15 10:14:12.638375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.089 10:14:13 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:36.089 10:14:13 json_config -- common/autotest_common.sh@862 -- # return 0 00:06:36.089 10:14:13 json_config -- json_config/common.sh@26 -- # echo '' 00:06:36.089 00:06:36.089 10:14:13 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:06:36.089 10:14:13 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:06:36.089 10:14:13 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:36.089 10:14:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:36.089 10:14:13 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:06:36.089 10:14:13 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:06:36.089 10:14:13 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:36.089 10:14:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:36.089 10:14:13 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:36.089 10:14:13 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:06:36.089 10:14:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:36.657 10:14:13 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:06:36.657 10:14:13 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:36.657 10:14:13 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:36.657 10:14:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:36.657 10:14:13 json_config -- 
json_config/json_config.sh@45 -- # local ret=0 00:06:36.657 10:14:13 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:36.657 10:14:13 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:36.657 10:14:13 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:06:36.657 10:14:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:36.657 10:14:13 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:06:36.657 10:14:13 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:36.657 10:14:13 json_config -- json_config/json_config.sh@48 -- # local get_types 00:06:36.657 10:14:13 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:06:36.657 10:14:13 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:06:36.657 10:14:13 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:36.657 10:14:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:36.916 10:14:13 json_config -- json_config/json_config.sh@55 -- # return 0 00:06:36.916 10:14:13 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:06:36.916 10:14:13 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:06:36.916 10:14:13 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:06:36.916 10:14:13 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:06:36.916 10:14:13 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:06:36.916 10:14:13 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:06:36.916 10:14:13 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:36.916 10:14:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:36.916 10:14:13 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:36.917 10:14:13 json_config -- json_config/json_config.sh@233 -- # [[ rdma == \r\d\m\a ]] 00:06:36.917 10:14:13 json_config -- json_config/json_config.sh@234 -- # TEST_TRANSPORT=rdma 00:06:36.917 10:14:13 json_config -- json_config/json_config.sh@234 -- # nvmftestinit 00:06:36.917 10:14:13 json_config -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:06:36.917 10:14:13 json_config -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:36.917 10:14:13 json_config -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:36.917 10:14:13 json_config -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:36.917 10:14:13 json_config -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:36.917 10:14:13 json_config -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:36.917 10:14:13 json_config -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:06:36.917 10:14:13 json_config -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:36.917 10:14:13 json_config -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:06:36.917 10:14:13 json_config -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:36.917 10:14:13 json_config -- nvmf/common.sh@285 -- # xtrace_disable 00:06:36.917 10:14:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@289 -- 
# local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@291 -- # pci_devs=() 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@295 -- # net_devs=() 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@296 -- # e810=() 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@296 -- # local -ga e810 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@297 -- # x722=() 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@297 -- # local -ga x722 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@298 -- # mlx=() 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@298 -- # local -ga mlx 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:06:45.107 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:06:45.107 10:14:21 json_config -- 
nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:06:45.107 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:06:45.107 Found net devices under 0000:98:00.0: mlx_0_0 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:06:45.107 Found net devices under 0000:98:00.1: mlx_0_1 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@414 -- # is_hw=yes 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@420 -- # rdma_device_init 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@58 -- # uname 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@62 -- # modprobe ib_cm 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@63 -- # modprobe ib_core 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@64 -- # modprobe ib_umad 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@66 -- # modprobe iw_cm 00:06:45.107 10:14:21 json_config -- 
nvmf/common.sh@67 -- # modprobe rdma_cm 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@502 -- # allocate_nic_ips 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@73 -- # get_rdma_if_list 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@104 -- # echo mlx_0_0 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@105 -- # continue 2 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@104 -- # echo mlx_0_1 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@105 -- # continue 2 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:06:45.107 10:14:21 json_config -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:06:45.107 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:45.107 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:06:45.107 altname enp152s0f0np0 00:06:45.107 altname ens817f0np0 00:06:45.107 inet 192.168.100.8/24 scope global mlx_0_0 00:06:45.107 valid_lft forever preferred_lft forever 00:06:45.108 10:14:21 json_config -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:06:45.108 10:14:21 json_config -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:06:45.108 10:14:21 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:06:45.108 10:14:21 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:06:45.108 10:14:21 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:45.108 10:14:21 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:45.108 10:14:21 json_config -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:06:45.108 10:14:21 
json_config -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:06:45.108 10:14:21 json_config -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:06:45.108 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:45.108 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:06:45.108 altname enp152s0f1np1 00:06:45.108 altname ens817f1np1 00:06:45.108 inet 192.168.100.9/24 scope global mlx_0_1 00:06:45.108 valid_lft forever preferred_lft forever 00:06:45.108 10:14:21 json_config -- nvmf/common.sh@422 -- # return 0 00:06:45.108 10:14:21 json_config -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:45.108 10:14:21 json_config -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:06:45.108 10:14:21 json_config -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:06:45.108 10:14:21 json_config -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:06:45.108 10:14:21 json_config -- nvmf/common.sh@86 -- # get_rdma_if_list 00:06:45.108 10:14:21 json_config -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:45.108 10:14:21 json_config -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:06:45.108 10:14:21 json_config -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:06:45.108 10:14:21 json_config -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:45.108 10:14:21 json_config -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:06:45.108 10:14:21 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:45.108 10:14:21 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:45.108 10:14:21 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:45.108 10:14:21 json_config -- nvmf/common.sh@104 -- # echo mlx_0_0 00:06:45.108 10:14:21 json_config -- nvmf/common.sh@105 -- # continue 2 00:06:45.108 10:14:21 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:45.108 10:14:21 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:45.108 10:14:21 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:45.108 10:14:21 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:45.108 10:14:21 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:45.108 10:14:21 json_config -- nvmf/common.sh@104 -- # echo mlx_0_1 00:06:45.108 10:14:21 json_config -- nvmf/common.sh@105 -- # continue 2 00:06:45.108 10:14:21 json_config -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:06:45.108 10:14:21 json_config -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:06:45.108 10:14:21 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:06:45.108 10:14:21 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:06:45.108 10:14:21 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:45.108 10:14:21 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:45.108 10:14:21 json_config -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:06:45.108 10:14:21 json_config -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:06:45.108 10:14:21 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:06:45.108 10:14:21 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:06:45.108 10:14:21 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:45.108 10:14:21 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:45.108 10:14:21 json_config -- nvmf/common.sh@456 -- # 
RDMA_IP_LIST='192.168.100.8 00:06:45.108 192.168.100.9' 00:06:45.108 10:14:21 json_config -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:06:45.108 192.168.100.9' 00:06:45.108 10:14:21 json_config -- nvmf/common.sh@457 -- # head -n 1 00:06:45.108 10:14:21 json_config -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:06:45.108 10:14:21 json_config -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:06:45.108 192.168.100.9' 00:06:45.108 10:14:21 json_config -- nvmf/common.sh@458 -- # tail -n +2 00:06:45.108 10:14:21 json_config -- nvmf/common.sh@458 -- # head -n 1 00:06:45.108 10:14:21 json_config -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:06:45.108 10:14:21 json_config -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:06:45.108 10:14:21 json_config -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:06:45.108 10:14:21 json_config -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:06:45.108 10:14:21 json_config -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:06:45.108 10:14:21 json_config -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:06:45.108 10:14:21 json_config -- json_config/json_config.sh@237 -- # [[ -z 192.168.100.8 ]] 00:06:45.108 10:14:21 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:45.108 10:14:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:45.108 MallocForNvmf0 00:06:45.108 10:14:21 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:45.108 10:14:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:45.108 MallocForNvmf1 00:06:45.108 10:14:22 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t rdma -u 8192 -c 0 00:06:45.108 10:14:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0 00:06:45.108 [2024-07-15 10:14:22.180780] rdma.c:2731:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:06:45.108 [2024-07-15 10:14:22.216516] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x12fc200/0x1329180) succeed. 00:06:45.108 [2024-07-15 10:14:22.231074] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x12fe3f0/0x1389140) succeed. 
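The long nvmf/common.sh block above enumerates the RDMA-capable ports (mlx_0_0, mlx_0_1) and pulls their IPv4 addresses; the first one becomes NVMF_FIRST_TARGET_IP and is later used for the listener. A condensed sketch of that derivation, using the same ip/awk/cut pipeline the trace shows (interface names are specific to this test bed):

# Sketch: derive the target IPs the way get_rdma_if_list/get_ip_address do above.
for ifc in mlx_0_0 mlx_0_1; do
    ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
done
# Prints 192.168.100.8 and 192.168.100.9 on this host.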
00:06:45.108 10:14:22 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:45.108 10:14:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:45.368 10:14:22 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:45.368 10:14:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:45.628 10:14:22 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:45.628 10:14:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:45.628 10:14:22 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:06:45.628 10:14:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:06:45.889 [2024-07-15 10:14:22.938725] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:45.889 10:14:22 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:06:45.889 10:14:22 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:45.889 10:14:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:45.889 10:14:23 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:06:45.889 10:14:23 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:45.889 10:14:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:45.889 10:14:23 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:06:45.889 10:14:23 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:45.889 10:14:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:46.148 MallocBdevForConfigChangeCheck 00:06:46.148 10:14:23 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:06:46.148 10:14:23 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:46.148 10:14:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:46.148 10:14:23 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:06:46.148 10:14:23 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:46.407 10:14:23 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:06:46.407 INFO: shutting down applications... 
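Taken together, the tgt_rpc calls above build the whole NVMe-oF over RDMA configuration and then snapshot it with save_config. A compressed, illustrative sketch of that sequence follows; it reuses the SPDK variable from the earlier sketch and writes the snapshot with a plain redirect, which is an approximation of what the harness does rather than its exact code.

# Sketch: the configuration assembled over /var/tmp/spdk_tgt.sock in the trace.
RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
$RPC bdev_malloc_create 8 512  --name MallocForNvmf0
$RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
$RPC nvmf_create_transport -t rdma -u 8192 -c 0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
$RPC bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck   # marker used by the later change-detection step
$RPC save_config > "$SPDK"/spdk_tgt_config.json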
00:06:46.407 10:14:23 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:06:46.407 10:14:23 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:06:46.407 10:14:23 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:06:46.407 10:14:23 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:46.976 Calling clear_iscsi_subsystem 00:06:46.976 Calling clear_nvmf_subsystem 00:06:46.976 Calling clear_nbd_subsystem 00:06:46.976 Calling clear_ublk_subsystem 00:06:46.976 Calling clear_vhost_blk_subsystem 00:06:46.976 Calling clear_vhost_scsi_subsystem 00:06:46.976 Calling clear_bdev_subsystem 00:06:46.976 10:14:23 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py 00:06:46.976 10:14:23 json_config -- json_config/json_config.sh@343 -- # count=100 00:06:46.976 10:14:23 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:06:46.976 10:14:23 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:46.976 10:14:23 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:46.976 10:14:23 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:47.236 10:14:24 json_config -- json_config/json_config.sh@345 -- # break 00:06:47.236 10:14:24 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:06:47.236 10:14:24 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:06:47.236 10:14:24 json_config -- json_config/common.sh@31 -- # local app=target 00:06:47.236 10:14:24 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:47.236 10:14:24 json_config -- json_config/common.sh@35 -- # [[ -n 2725869 ]] 00:06:47.236 10:14:24 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2725869 00:06:47.236 10:14:24 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:47.236 10:14:24 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:47.236 10:14:24 json_config -- json_config/common.sh@41 -- # kill -0 2725869 00:06:47.236 10:14:24 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:47.807 10:14:24 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:47.807 10:14:24 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:47.807 10:14:24 json_config -- json_config/common.sh@41 -- # kill -0 2725869 00:06:47.807 10:14:24 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:47.807 10:14:24 json_config -- json_config/common.sh@43 -- # break 00:06:47.807 10:14:24 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:47.807 10:14:24 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:47.807 SPDK target shutdown done 00:06:47.807 10:14:24 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:06:47.807 INFO: relaunching applications... 
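After clear_config.py has emptied the target over the same socket, the shutdown above is cooperative: json_config_test_shutdown_app sends SIGINT and then polls the PID for up to roughly 15 seconds (30 polls, 0.5 s apart) before printing 'SPDK target shutdown done'. A sketch of that wait loop, assuming tgt_pid holds the PID reported by waitforlisten:

# Sketch of the shutdown pattern in the trace: SIGINT, then poll until the PID is gone.
kill -SIGINT "$tgt_pid"
for _ in $(seq 1 30); do
    kill -0 "$tgt_pid" 2>/dev/null || break
    sleep 0.5
done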
00:06:47.807 10:14:24 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:47.807 10:14:24 json_config -- json_config/common.sh@9 -- # local app=target 00:06:47.807 10:14:24 json_config -- json_config/common.sh@10 -- # shift 00:06:47.807 10:14:24 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:47.807 10:14:24 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:47.807 10:14:24 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:47.807 10:14:24 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:47.807 10:14:24 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:47.807 10:14:24 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2731249 00:06:47.807 10:14:24 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:47.807 Waiting for target to run... 00:06:47.807 10:14:24 json_config -- json_config/common.sh@25 -- # waitforlisten 2731249 /var/tmp/spdk_tgt.sock 00:06:47.807 10:14:24 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:47.807 10:14:24 json_config -- common/autotest_common.sh@829 -- # '[' -z 2731249 ']' 00:06:47.807 10:14:24 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:47.807 10:14:24 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:47.807 10:14:24 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:47.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:47.807 10:14:24 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:47.807 10:14:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:47.807 [2024-07-15 10:14:24.824053] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:47.807 [2024-07-15 10:14:24.824111] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2731249 ] 00:06:47.808 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.067 [2024-07-15 10:14:25.107479] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.067 [2024-07-15 10:14:25.159428] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.636 [2024-07-15 10:14:25.686761] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x137b320/0x11e5800) succeed. 00:06:48.636 [2024-07-15 10:14:25.700396] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x137fd80/0x1265880) succeed. 
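The relaunch differs from the first boot in one flag: instead of --wait-for-rpc, the target is started from the JSON snapshot saved a moment ago, so the malloc bdevs, the RDMA transport and the subsystem all come back without any further RPC calls. Illustrative sketch, paths as before:

# Sketch: second boot, driven entirely by the saved configuration file.
"$SPDK"/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
    --json "$SPDK"/spdk_tgt_config.json &
tgt_pid=$!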
00:06:48.636 [2024-07-15 10:14:25.755643] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:48.636 10:14:25 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:48.636 10:14:25 json_config -- common/autotest_common.sh@862 -- # return 0 00:06:48.636 10:14:25 json_config -- json_config/common.sh@26 -- # echo '' 00:06:48.636 00:06:48.636 10:14:25 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:06:48.636 10:14:25 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:48.636 INFO: Checking if target configuration is the same... 00:06:48.636 10:14:25 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:06:48.636 10:14:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:48.636 10:14:25 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:48.636 + '[' 2 -ne 2 ']' 00:06:48.636 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:48.636 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:06:48.636 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:06:48.636 +++ basename /dev/fd/62 00:06:48.636 ++ mktemp /tmp/62.XXX 00:06:48.636 + tmp_file_1=/tmp/62.JZ7 00:06:48.636 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:48.636 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:48.636 + tmp_file_2=/tmp/spdk_tgt_config.json.Bjm 00:06:48.636 + ret=0 00:06:48.636 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:48.895 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:49.155 + diff -u /tmp/62.JZ7 /tmp/spdk_tgt_config.json.Bjm 00:06:49.155 + echo 'INFO: JSON config files are the same' 00:06:49.155 INFO: JSON config files are the same 00:06:49.155 + rm /tmp/62.JZ7 /tmp/spdk_tgt_config.json.Bjm 00:06:49.155 + exit 0 00:06:49.155 10:14:26 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:06:49.155 10:14:26 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:49.155 INFO: changing configuration and checking if this can be detected... 
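json_diff.sh performs the actual verification above: it normalizes both the live configuration (save_config over the RPC socket) and the saved file with config_filter.py -method sort, then runs diff -u on the results. A hedged equivalent, with the temporary file names invented for clarity and the RPC/SPDK variables reused from the sketches above:

# Sketch of the comparison behind 'INFO: JSON config files are the same'.
$RPC save_config | "$SPDK"/test/json_config/config_filter.py -method sort > /tmp/live.json
"$SPDK"/test/json_config/config_filter.py -method sort < "$SPDK"/spdk_tgt_config.json > /tmp/saved.json
diff -u /tmp/saved.json /tmp/live.json && echo 'configs match'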
00:06:49.155 10:14:26 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:49.155 10:14:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:49.155 10:14:26 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:49.155 10:14:26 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:06:49.155 10:14:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:49.155 + '[' 2 -ne 2 ']' 00:06:49.155 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:49.155 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:06:49.155 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:06:49.155 +++ basename /dev/fd/62 00:06:49.155 ++ mktemp /tmp/62.XXX 00:06:49.155 + tmp_file_1=/tmp/62.ZGZ 00:06:49.155 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:49.155 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:49.155 + tmp_file_2=/tmp/spdk_tgt_config.json.KRD 00:06:49.155 + ret=0 00:06:49.155 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:49.414 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:49.674 + diff -u /tmp/62.ZGZ /tmp/spdk_tgt_config.json.KRD 00:06:49.674 + ret=1 00:06:49.674 + echo '=== Start of file: /tmp/62.ZGZ ===' 00:06:49.674 + cat /tmp/62.ZGZ 00:06:49.674 + echo '=== End of file: /tmp/62.ZGZ ===' 00:06:49.674 + echo '' 00:06:49.674 + echo '=== Start of file: /tmp/spdk_tgt_config.json.KRD ===' 00:06:49.674 + cat /tmp/spdk_tgt_config.json.KRD 00:06:49.674 + echo '=== End of file: /tmp/spdk_tgt_config.json.KRD ===' 00:06:49.674 + echo '' 00:06:49.674 + rm /tmp/62.ZGZ /tmp/spdk_tgt_config.json.KRD 00:06:49.674 + exit 1 00:06:49.674 10:14:26 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:06:49.674 INFO: configuration change detected. 
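Change detection is the mirror image of the previous check: the harness deletes the MallocBdevForConfigChangeCheck marker from the running target, so the live configuration no longer matches the saved file, re-runs the same diff and now expects it to fail (ret=1). Sketch, again reusing the variables and temp files from the sketches above:

# Sketch: force a difference, then expect the comparison to return non-zero.
$RPC bdev_malloc_delete MallocBdevForConfigChangeCheck
$RPC save_config | "$SPDK"/test/json_config/config_filter.py -method sort > /tmp/live.json
diff -u /tmp/saved.json /tmp/live.json || echo 'INFO: configuration change detected.'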
00:06:49.674 10:14:26 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:06:49.674 10:14:26 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:06:49.674 10:14:26 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:49.674 10:14:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:49.674 10:14:26 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:06:49.674 10:14:26 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:06:49.674 10:14:26 json_config -- json_config/json_config.sh@317 -- # [[ -n 2731249 ]] 00:06:49.674 10:14:26 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:06:49.674 10:14:26 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:06:49.674 10:14:26 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:49.674 10:14:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:49.674 10:14:26 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:06:49.674 10:14:26 json_config -- json_config/json_config.sh@193 -- # uname -s 00:06:49.674 10:14:26 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:06:49.674 10:14:26 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:06:49.674 10:14:26 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:06:49.674 10:14:26 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:06:49.674 10:14:26 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:49.674 10:14:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:49.674 10:14:26 json_config -- json_config/json_config.sh@323 -- # killprocess 2731249 00:06:49.674 10:14:26 json_config -- common/autotest_common.sh@948 -- # '[' -z 2731249 ']' 00:06:49.674 10:14:26 json_config -- common/autotest_common.sh@952 -- # kill -0 2731249 00:06:49.674 10:14:26 json_config -- common/autotest_common.sh@953 -- # uname 00:06:49.674 10:14:26 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:49.674 10:14:26 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2731249 00:06:49.674 10:14:26 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:49.674 10:14:26 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:49.674 10:14:26 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2731249' 00:06:49.674 killing process with pid 2731249 00:06:49.674 10:14:26 json_config -- common/autotest_common.sh@967 -- # kill 2731249 00:06:49.674 10:14:26 json_config -- common/autotest_common.sh@972 -- # wait 2731249 00:06:49.933 10:14:27 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:49.933 10:14:27 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:06:49.933 10:14:27 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:49.933 10:14:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:50.194 10:14:27 json_config -- json_config/json_config.sh@328 -- # return 0 00:06:50.194 10:14:27 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:06:50.194 INFO: Success 00:06:50.194 10:14:27 json_config -- 
json_config/json_config.sh@1 -- # nvmftestfini 00:06:50.194 10:14:27 json_config -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:50.194 10:14:27 json_config -- nvmf/common.sh@117 -- # sync 00:06:50.194 10:14:27 json_config -- nvmf/common.sh@119 -- # '[' '' == tcp ']' 00:06:50.194 10:14:27 json_config -- nvmf/common.sh@119 -- # '[' '' == rdma ']' 00:06:50.194 10:14:27 json_config -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:06:50.194 10:14:27 json_config -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:50.194 10:14:27 json_config -- nvmf/common.sh@495 -- # [[ '' == \t\c\p ]] 00:06:50.194 00:06:50.194 real 0m15.015s 00:06:50.194 user 0m18.738s 00:06:50.194 sys 0m7.438s 00:06:50.194 10:14:27 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:50.194 10:14:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:50.194 ************************************ 00:06:50.194 END TEST json_config 00:06:50.194 ************************************ 00:06:50.194 10:14:27 -- common/autotest_common.sh@1142 -- # return 0 00:06:50.194 10:14:27 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:50.194 10:14:27 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:50.194 10:14:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.194 10:14:27 -- common/autotest_common.sh@10 -- # set +x 00:06:50.194 ************************************ 00:06:50.194 START TEST json_config_extra_key 00:06:50.194 ************************************ 00:06:50.194 10:14:27 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:50.194 10:14:27 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:50.194 10:14:27 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:50.194 10:14:27 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:50.194 10:14:27 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:50.194 10:14:27 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:50.194 10:14:27 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:50.194 10:14:27 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:50.194 10:14:27 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:50.194 10:14:27 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:50.194 10:14:27 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:50.194 10:14:27 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:50.194 10:14:27 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:50.194 10:14:27 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:50.194 10:14:27 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:50.194 10:14:27 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:50.194 10:14:27 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:50.194 10:14:27 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:50.194 10:14:27 
json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:50.194 10:14:27 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:50.194 10:14:27 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:50.194 10:14:27 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:50.194 10:14:27 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:50.194 10:14:27 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.194 10:14:27 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.194 10:14:27 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.194 10:14:27 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:50.194 10:14:27 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.194 10:14:27 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:50.194 10:14:27 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:50.194 10:14:27 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:50.194 10:14:27 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:50.194 10:14:27 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:50.194 10:14:27 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:50.194 10:14:27 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:50.194 10:14:27 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:50.194 10:14:27 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:50.194 10:14:27 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:06:50.194 10:14:27 json_config_extra_key -- 
json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:50.194 10:14:27 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:50.194 10:14:27 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:50.194 10:14:27 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:50.194 10:14:27 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:50.194 10:14:27 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:50.194 10:14:27 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:50.194 10:14:27 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:50.194 10:14:27 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:50.194 10:14:27 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:50.194 INFO: launching applications... 00:06:50.194 10:14:27 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:06:50.194 10:14:27 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:50.194 10:14:27 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:50.194 10:14:27 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:50.194 10:14:27 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:50.194 10:14:27 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:50.194 10:14:27 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:50.194 10:14:27 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:50.194 10:14:27 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2731748 00:06:50.194 10:14:27 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:50.194 Waiting for target to run... 00:06:50.194 10:14:27 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2731748 /var/tmp/spdk_tgt.sock 00:06:50.194 10:14:27 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 2731748 ']' 00:06:50.194 10:14:27 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:50.194 10:14:27 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:06:50.194 10:14:27 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:50.194 10:14:27 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:50.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
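The json_config_extra_key harness traced above keeps its per-application state in bash associative arrays keyed by a logical app name ('target' here): app_pid, app_socket, app_params and configs_path. A minimal sketch of that pattern, not part of the captured log, using only the values visible in the trace:

#!/usr/bin/env bash
# one associative array per attribute, all keyed by the same logical app name
declare -A app_pid=(['target']='')
declare -A app_socket=(['target']='/var/tmp/spdk_tgt.sock')
declare -A app_params=(['target']='-m 0x1 -s 1024')
declare -A configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json')

# later steps address everything through the same key
app=target
echo "launching '${app}' with ${app_params[$app]} on ${app_socket[$app]}"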
00:06:50.194 10:14:27 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:50.194 10:14:27 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:50.194 [2024-07-15 10:14:27.377394] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:50.194 [2024-07-15 10:14:27.377476] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2731748 ] 00:06:50.454 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.454 [2024-07-15 10:14:27.648488] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.715 [2024-07-15 10:14:27.700187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.974 10:14:28 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:50.974 10:14:28 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:06:50.974 10:14:28 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:50.974 00:06:50.974 10:14:28 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:50.974 INFO: shutting down applications... 00:06:50.974 10:14:28 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:50.974 10:14:28 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:50.974 10:14:28 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:50.974 10:14:28 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2731748 ]] 00:06:50.974 10:14:28 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2731748 00:06:50.974 10:14:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:50.974 10:14:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:50.974 10:14:28 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2731748 00:06:50.974 10:14:28 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:51.546 10:14:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:51.546 10:14:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:51.546 10:14:28 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2731748 00:06:51.546 10:14:28 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:51.546 10:14:28 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:51.546 10:14:28 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:51.546 10:14:28 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:51.546 SPDK target shutdown done 00:06:51.546 10:14:28 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:51.546 Success 00:06:51.546 00:06:51.546 real 0m1.434s 00:06:51.546 user 0m1.064s 00:06:51.546 sys 0m0.377s 00:06:51.546 10:14:28 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:51.546 10:14:28 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:51.546 ************************************ 00:06:51.546 END TEST json_config_extra_key 00:06:51.546 ************************************ 00:06:51.546 10:14:28 -- common/autotest_common.sh@1142 -- # return 0 00:06:51.546 10:14:28 -- 
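Stripped of the xtrace noise, the json_config_extra_key run above amounts to: start spdk_tgt with a pre-built JSON config and a private RPC socket, wait for that socket to answer, then shut the target down with SIGINT and poll until the pid disappears. A hedged sketch of that flow (the until-loop is a simplified stand-in for the harness's waitforlisten helper, not the harness code itself):

SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk

# launch the target with the extra_key JSON config, RPC on a private socket
"$SPDK"/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
    --json "$SPDK"/test/json_config/extra_key.json &
tgt_pid=$!

# wait until the UNIX-domain RPC socket answers
until "$SPDK"/scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods &>/dev/null; do
    sleep 0.5
done

# graceful shutdown: SIGINT, then the same 30 x 0.5 s poll the trace shows
kill -SIGINT "$tgt_pid"
for ((i = 0; i < 30; i++)); do
    kill -0 "$tgt_pid" 2>/dev/null || break
    sleep 0.5
done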
spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:51.546 10:14:28 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:51.546 10:14:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.546 10:14:28 -- common/autotest_common.sh@10 -- # set +x 00:06:51.546 ************************************ 00:06:51.546 START TEST alias_rpc 00:06:51.546 ************************************ 00:06:51.546 10:14:28 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:51.807 * Looking for test storage... 00:06:51.807 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc 00:06:51.807 10:14:28 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:51.807 10:14:28 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2732094 00:06:51.807 10:14:28 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2732094 00:06:51.807 10:14:28 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:51.807 10:14:28 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 2732094 ']' 00:06:51.807 10:14:28 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.807 10:14:28 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:51.807 10:14:28 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.807 10:14:28 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:51.807 10:14:28 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.807 [2024-07-15 10:14:28.890810] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:06:51.807 [2024-07-15 10:14:28.890885] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2732094 ] 00:06:51.807 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.807 [2024-07-15 10:14:28.966180] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.068 [2024-07-15 10:14:29.040915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.638 10:14:29 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:52.638 10:14:29 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:52.638 10:14:29 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:52.898 10:14:29 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2732094 00:06:52.898 10:14:29 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 2732094 ']' 00:06:52.898 10:14:29 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 2732094 00:06:52.898 10:14:29 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:06:52.898 10:14:29 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:52.898 10:14:29 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2732094 00:06:52.898 10:14:29 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:52.898 10:14:29 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:52.898 10:14:29 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2732094' 00:06:52.898 killing process with pid 2732094 00:06:52.898 10:14:29 alias_rpc -- common/autotest_common.sh@967 -- # kill 2732094 00:06:52.898 10:14:29 alias_rpc -- common/autotest_common.sh@972 -- # wait 2732094 00:06:53.158 00:06:53.158 real 0m1.405s 00:06:53.158 user 0m1.538s 00:06:53.158 sys 0m0.400s 00:06:53.158 10:14:30 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:53.158 10:14:30 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.158 ************************************ 00:06:53.158 END TEST alias_rpc 00:06:53.158 ************************************ 00:06:53.158 10:14:30 -- common/autotest_common.sh@1142 -- # return 0 00:06:53.158 10:14:30 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:53.158 10:14:30 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:53.158 10:14:30 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:53.158 10:14:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.158 10:14:30 -- common/autotest_common.sh@10 -- # set +x 00:06:53.158 ************************************ 00:06:53.158 START TEST spdkcli_tcp 00:06:53.158 ************************************ 00:06:53.158 10:14:30 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:53.158 * Looking for test storage... 
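The alias_rpc run above reuses the same start/stop pattern; the actual check is a single load_config call through scripts/rpc.py against the default /var/tmp/spdk.sock socket. A short sketch (the input file name is hypothetical — the trace does not show where load_config's stdin comes from):

SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk

# bare target on the default RPC socket
"$SPDK"/build/bin/spdk_tgt &
tgt_pid=$!
until "$SPDK"/scripts/rpc.py rpc_get_methods &>/dev/null; do sleep 0.5; done

# replay a saved configuration through the RPC layer, as the test's 'load_config -i' does
"$SPDK"/scripts/rpc.py load_config -i < /tmp/example_saved_config.json   # hypothetical input file

kill -SIGINT "$tgt_pid"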
00:06:53.158 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:06:53.158 10:14:30 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:06:53.158 10:14:30 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:53.158 10:14:30 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:06:53.158 10:14:30 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:53.158 10:14:30 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:53.158 10:14:30 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:53.158 10:14:30 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:53.158 10:14:30 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:53.158 10:14:30 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:53.158 10:14:30 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2732485 00:06:53.158 10:14:30 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2732485 00:06:53.158 10:14:30 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:53.158 10:14:30 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 2732485 ']' 00:06:53.158 10:14:30 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.158 10:14:30 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:53.158 10:14:30 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.158 10:14:30 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:53.158 10:14:30 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:53.418 [2024-07-15 10:14:30.359751] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:06:53.418 [2024-07-15 10:14:30.359807] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2732485 ] 00:06:53.418 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.418 [2024-07-15 10:14:30.427813] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:53.418 [2024-07-15 10:14:30.497925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.418 [2024-07-15 10:14:30.497928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.988 10:14:31 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:53.988 10:14:31 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:06:53.988 10:14:31 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2732749 00:06:53.988 10:14:31 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:53.988 10:14:31 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:54.251 [ 00:06:54.251 "bdev_malloc_delete", 00:06:54.251 "bdev_malloc_create", 00:06:54.251 "bdev_null_resize", 00:06:54.251 "bdev_null_delete", 00:06:54.251 "bdev_null_create", 00:06:54.251 "bdev_nvme_cuse_unregister", 00:06:54.251 "bdev_nvme_cuse_register", 00:06:54.251 "bdev_opal_new_user", 00:06:54.251 "bdev_opal_set_lock_state", 00:06:54.251 "bdev_opal_delete", 00:06:54.251 "bdev_opal_get_info", 00:06:54.251 "bdev_opal_create", 00:06:54.251 "bdev_nvme_opal_revert", 00:06:54.251 "bdev_nvme_opal_init", 00:06:54.251 "bdev_nvme_send_cmd", 00:06:54.251 "bdev_nvme_get_path_iostat", 00:06:54.251 "bdev_nvme_get_mdns_discovery_info", 00:06:54.251 "bdev_nvme_stop_mdns_discovery", 00:06:54.251 "bdev_nvme_start_mdns_discovery", 00:06:54.251 "bdev_nvme_set_multipath_policy", 00:06:54.251 "bdev_nvme_set_preferred_path", 00:06:54.251 "bdev_nvme_get_io_paths", 00:06:54.251 "bdev_nvme_remove_error_injection", 00:06:54.251 "bdev_nvme_add_error_injection", 00:06:54.251 "bdev_nvme_get_discovery_info", 00:06:54.251 "bdev_nvme_stop_discovery", 00:06:54.251 "bdev_nvme_start_discovery", 00:06:54.251 "bdev_nvme_get_controller_health_info", 00:06:54.251 "bdev_nvme_disable_controller", 00:06:54.251 "bdev_nvme_enable_controller", 00:06:54.251 "bdev_nvme_reset_controller", 00:06:54.251 "bdev_nvme_get_transport_statistics", 00:06:54.251 "bdev_nvme_apply_firmware", 00:06:54.251 "bdev_nvme_detach_controller", 00:06:54.251 "bdev_nvme_get_controllers", 00:06:54.251 "bdev_nvme_attach_controller", 00:06:54.251 "bdev_nvme_set_hotplug", 00:06:54.251 "bdev_nvme_set_options", 00:06:54.251 "bdev_passthru_delete", 00:06:54.251 "bdev_passthru_create", 00:06:54.251 "bdev_lvol_set_parent_bdev", 00:06:54.251 "bdev_lvol_set_parent", 00:06:54.251 "bdev_lvol_check_shallow_copy", 00:06:54.251 "bdev_lvol_start_shallow_copy", 00:06:54.251 "bdev_lvol_grow_lvstore", 00:06:54.251 "bdev_lvol_get_lvols", 00:06:54.251 "bdev_lvol_get_lvstores", 00:06:54.251 "bdev_lvol_delete", 00:06:54.251 "bdev_lvol_set_read_only", 00:06:54.251 "bdev_lvol_resize", 00:06:54.251 "bdev_lvol_decouple_parent", 00:06:54.251 "bdev_lvol_inflate", 00:06:54.251 "bdev_lvol_rename", 00:06:54.251 "bdev_lvol_clone_bdev", 00:06:54.251 "bdev_lvol_clone", 00:06:54.251 "bdev_lvol_snapshot", 00:06:54.251 "bdev_lvol_create", 00:06:54.251 "bdev_lvol_delete_lvstore", 00:06:54.251 
"bdev_lvol_rename_lvstore", 00:06:54.251 "bdev_lvol_create_lvstore", 00:06:54.251 "bdev_raid_set_options", 00:06:54.251 "bdev_raid_remove_base_bdev", 00:06:54.251 "bdev_raid_add_base_bdev", 00:06:54.251 "bdev_raid_delete", 00:06:54.251 "bdev_raid_create", 00:06:54.251 "bdev_raid_get_bdevs", 00:06:54.251 "bdev_error_inject_error", 00:06:54.251 "bdev_error_delete", 00:06:54.251 "bdev_error_create", 00:06:54.251 "bdev_split_delete", 00:06:54.251 "bdev_split_create", 00:06:54.251 "bdev_delay_delete", 00:06:54.251 "bdev_delay_create", 00:06:54.251 "bdev_delay_update_latency", 00:06:54.251 "bdev_zone_block_delete", 00:06:54.251 "bdev_zone_block_create", 00:06:54.251 "blobfs_create", 00:06:54.251 "blobfs_detect", 00:06:54.251 "blobfs_set_cache_size", 00:06:54.251 "bdev_aio_delete", 00:06:54.251 "bdev_aio_rescan", 00:06:54.251 "bdev_aio_create", 00:06:54.251 "bdev_ftl_set_property", 00:06:54.251 "bdev_ftl_get_properties", 00:06:54.251 "bdev_ftl_get_stats", 00:06:54.251 "bdev_ftl_unmap", 00:06:54.251 "bdev_ftl_unload", 00:06:54.251 "bdev_ftl_delete", 00:06:54.251 "bdev_ftl_load", 00:06:54.251 "bdev_ftl_create", 00:06:54.251 "bdev_virtio_attach_controller", 00:06:54.251 "bdev_virtio_scsi_get_devices", 00:06:54.251 "bdev_virtio_detach_controller", 00:06:54.251 "bdev_virtio_blk_set_hotplug", 00:06:54.251 "bdev_iscsi_delete", 00:06:54.251 "bdev_iscsi_create", 00:06:54.251 "bdev_iscsi_set_options", 00:06:54.251 "accel_error_inject_error", 00:06:54.251 "ioat_scan_accel_module", 00:06:54.251 "dsa_scan_accel_module", 00:06:54.251 "iaa_scan_accel_module", 00:06:54.251 "keyring_file_remove_key", 00:06:54.251 "keyring_file_add_key", 00:06:54.251 "keyring_linux_set_options", 00:06:54.251 "iscsi_get_histogram", 00:06:54.251 "iscsi_enable_histogram", 00:06:54.251 "iscsi_set_options", 00:06:54.251 "iscsi_get_auth_groups", 00:06:54.251 "iscsi_auth_group_remove_secret", 00:06:54.251 "iscsi_auth_group_add_secret", 00:06:54.251 "iscsi_delete_auth_group", 00:06:54.251 "iscsi_create_auth_group", 00:06:54.251 "iscsi_set_discovery_auth", 00:06:54.251 "iscsi_get_options", 00:06:54.251 "iscsi_target_node_request_logout", 00:06:54.251 "iscsi_target_node_set_redirect", 00:06:54.251 "iscsi_target_node_set_auth", 00:06:54.251 "iscsi_target_node_add_lun", 00:06:54.251 "iscsi_get_stats", 00:06:54.251 "iscsi_get_connections", 00:06:54.251 "iscsi_portal_group_set_auth", 00:06:54.251 "iscsi_start_portal_group", 00:06:54.251 "iscsi_delete_portal_group", 00:06:54.251 "iscsi_create_portal_group", 00:06:54.251 "iscsi_get_portal_groups", 00:06:54.251 "iscsi_delete_target_node", 00:06:54.251 "iscsi_target_node_remove_pg_ig_maps", 00:06:54.251 "iscsi_target_node_add_pg_ig_maps", 00:06:54.251 "iscsi_create_target_node", 00:06:54.251 "iscsi_get_target_nodes", 00:06:54.251 "iscsi_delete_initiator_group", 00:06:54.251 "iscsi_initiator_group_remove_initiators", 00:06:54.251 "iscsi_initiator_group_add_initiators", 00:06:54.251 "iscsi_create_initiator_group", 00:06:54.251 "iscsi_get_initiator_groups", 00:06:54.251 "nvmf_set_crdt", 00:06:54.251 "nvmf_set_config", 00:06:54.251 "nvmf_set_max_subsystems", 00:06:54.251 "nvmf_stop_mdns_prr", 00:06:54.251 "nvmf_publish_mdns_prr", 00:06:54.251 "nvmf_subsystem_get_listeners", 00:06:54.251 "nvmf_subsystem_get_qpairs", 00:06:54.251 "nvmf_subsystem_get_controllers", 00:06:54.251 "nvmf_get_stats", 00:06:54.251 "nvmf_get_transports", 00:06:54.251 "nvmf_create_transport", 00:06:54.251 "nvmf_get_targets", 00:06:54.251 "nvmf_delete_target", 00:06:54.251 "nvmf_create_target", 00:06:54.251 
"nvmf_subsystem_allow_any_host", 00:06:54.251 "nvmf_subsystem_remove_host", 00:06:54.251 "nvmf_subsystem_add_host", 00:06:54.251 "nvmf_ns_remove_host", 00:06:54.251 "nvmf_ns_add_host", 00:06:54.251 "nvmf_subsystem_remove_ns", 00:06:54.251 "nvmf_subsystem_add_ns", 00:06:54.251 "nvmf_subsystem_listener_set_ana_state", 00:06:54.251 "nvmf_discovery_get_referrals", 00:06:54.251 "nvmf_discovery_remove_referral", 00:06:54.251 "nvmf_discovery_add_referral", 00:06:54.251 "nvmf_subsystem_remove_listener", 00:06:54.251 "nvmf_subsystem_add_listener", 00:06:54.251 "nvmf_delete_subsystem", 00:06:54.251 "nvmf_create_subsystem", 00:06:54.251 "nvmf_get_subsystems", 00:06:54.251 "env_dpdk_get_mem_stats", 00:06:54.251 "nbd_get_disks", 00:06:54.251 "nbd_stop_disk", 00:06:54.251 "nbd_start_disk", 00:06:54.251 "ublk_recover_disk", 00:06:54.251 "ublk_get_disks", 00:06:54.251 "ublk_stop_disk", 00:06:54.251 "ublk_start_disk", 00:06:54.251 "ublk_destroy_target", 00:06:54.251 "ublk_create_target", 00:06:54.251 "virtio_blk_create_transport", 00:06:54.251 "virtio_blk_get_transports", 00:06:54.251 "vhost_controller_set_coalescing", 00:06:54.251 "vhost_get_controllers", 00:06:54.251 "vhost_delete_controller", 00:06:54.251 "vhost_create_blk_controller", 00:06:54.251 "vhost_scsi_controller_remove_target", 00:06:54.251 "vhost_scsi_controller_add_target", 00:06:54.251 "vhost_start_scsi_controller", 00:06:54.251 "vhost_create_scsi_controller", 00:06:54.251 "thread_set_cpumask", 00:06:54.251 "framework_get_governor", 00:06:54.251 "framework_get_scheduler", 00:06:54.251 "framework_set_scheduler", 00:06:54.251 "framework_get_reactors", 00:06:54.251 "thread_get_io_channels", 00:06:54.251 "thread_get_pollers", 00:06:54.251 "thread_get_stats", 00:06:54.251 "framework_monitor_context_switch", 00:06:54.251 "spdk_kill_instance", 00:06:54.251 "log_enable_timestamps", 00:06:54.251 "log_get_flags", 00:06:54.251 "log_clear_flag", 00:06:54.251 "log_set_flag", 00:06:54.251 "log_get_level", 00:06:54.251 "log_set_level", 00:06:54.251 "log_get_print_level", 00:06:54.251 "log_set_print_level", 00:06:54.251 "framework_enable_cpumask_locks", 00:06:54.251 "framework_disable_cpumask_locks", 00:06:54.251 "framework_wait_init", 00:06:54.251 "framework_start_init", 00:06:54.251 "scsi_get_devices", 00:06:54.251 "bdev_get_histogram", 00:06:54.251 "bdev_enable_histogram", 00:06:54.251 "bdev_set_qos_limit", 00:06:54.251 "bdev_set_qd_sampling_period", 00:06:54.251 "bdev_get_bdevs", 00:06:54.251 "bdev_reset_iostat", 00:06:54.251 "bdev_get_iostat", 00:06:54.251 "bdev_examine", 00:06:54.251 "bdev_wait_for_examine", 00:06:54.251 "bdev_set_options", 00:06:54.251 "notify_get_notifications", 00:06:54.251 "notify_get_types", 00:06:54.251 "accel_get_stats", 00:06:54.251 "accel_set_options", 00:06:54.251 "accel_set_driver", 00:06:54.251 "accel_crypto_key_destroy", 00:06:54.251 "accel_crypto_keys_get", 00:06:54.251 "accel_crypto_key_create", 00:06:54.251 "accel_assign_opc", 00:06:54.251 "accel_get_module_info", 00:06:54.251 "accel_get_opc_assignments", 00:06:54.251 "vmd_rescan", 00:06:54.251 "vmd_remove_device", 00:06:54.251 "vmd_enable", 00:06:54.251 "sock_get_default_impl", 00:06:54.251 "sock_set_default_impl", 00:06:54.251 "sock_impl_set_options", 00:06:54.251 "sock_impl_get_options", 00:06:54.251 "iobuf_get_stats", 00:06:54.251 "iobuf_set_options", 00:06:54.251 "framework_get_pci_devices", 00:06:54.251 "framework_get_config", 00:06:54.251 "framework_get_subsystems", 00:06:54.251 "trace_get_info", 00:06:54.251 "trace_get_tpoint_group_mask", 00:06:54.251 
"trace_disable_tpoint_group", 00:06:54.251 "trace_enable_tpoint_group", 00:06:54.251 "trace_clear_tpoint_mask", 00:06:54.251 "trace_set_tpoint_mask", 00:06:54.251 "keyring_get_keys", 00:06:54.251 "spdk_get_version", 00:06:54.251 "rpc_get_methods" 00:06:54.251 ] 00:06:54.251 10:14:31 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:54.251 10:14:31 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:54.251 10:14:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:54.251 10:14:31 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:54.251 10:14:31 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2732485 00:06:54.251 10:14:31 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 2732485 ']' 00:06:54.251 10:14:31 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 2732485 00:06:54.251 10:14:31 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:06:54.251 10:14:31 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:54.251 10:14:31 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2732485 00:06:54.251 10:14:31 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:54.251 10:14:31 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:54.251 10:14:31 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2732485' 00:06:54.251 killing process with pid 2732485 00:06:54.251 10:14:31 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 2732485 00:06:54.251 10:14:31 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 2732485 00:06:54.540 00:06:54.540 real 0m1.379s 00:06:54.540 user 0m2.516s 00:06:54.540 sys 0m0.418s 00:06:54.540 10:14:31 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:54.540 10:14:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:54.540 ************************************ 00:06:54.540 END TEST spdkcli_tcp 00:06:54.540 ************************************ 00:06:54.540 10:14:31 -- common/autotest_common.sh@1142 -- # return 0 00:06:54.540 10:14:31 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:54.540 10:14:31 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:54.540 10:14:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.540 10:14:31 -- common/autotest_common.sh@10 -- # set +x 00:06:54.540 ************************************ 00:06:54.540 START TEST dpdk_mem_utility 00:06:54.540 ************************************ 00:06:54.540 10:14:31 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:54.846 * Looking for test storage... 
00:06:54.846 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility 00:06:54.846 10:14:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:54.846 10:14:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2732888 00:06:54.846 10:14:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2732888 00:06:54.846 10:14:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:54.846 10:14:31 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 2732888 ']' 00:06:54.846 10:14:31 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.846 10:14:31 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:54.846 10:14:31 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.846 10:14:31 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:54.846 10:14:31 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:54.846 [2024-07-15 10:14:31.802678] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:54.846 [2024-07-15 10:14:31.802740] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2732888 ] 00:06:54.846 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.846 [2024-07-15 10:14:31.873361] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.846 [2024-07-15 10:14:31.947064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.416 10:14:32 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:55.416 10:14:32 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:06:55.416 10:14:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:55.416 10:14:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:55.416 10:14:32 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.416 10:14:32 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:55.416 { 00:06:55.416 "filename": "/tmp/spdk_mem_dump.txt" 00:06:55.416 } 00:06:55.416 10:14:32 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.416 10:14:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:55.676 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:55.676 1 heaps totaling size 814.000000 MiB 00:06:55.676 size: 814.000000 MiB heap id: 0 00:06:55.676 end heaps---------- 00:06:55.676 8 mempools totaling size 598.116089 MiB 00:06:55.676 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:55.676 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:55.676 size: 84.521057 MiB name: bdev_io_2732888 00:06:55.676 size: 51.011292 MiB name: evtpool_2732888 00:06:55.676 size: 50.003479 MiB 
name: msgpool_2732888 00:06:55.676 size: 21.763794 MiB name: PDU_Pool 00:06:55.676 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:55.676 size: 0.026123 MiB name: Session_Pool 00:06:55.676 end mempools------- 00:06:55.676 6 memzones totaling size 4.142822 MiB 00:06:55.676 size: 1.000366 MiB name: RG_ring_0_2732888 00:06:55.676 size: 1.000366 MiB name: RG_ring_1_2732888 00:06:55.676 size: 1.000366 MiB name: RG_ring_4_2732888 00:06:55.676 size: 1.000366 MiB name: RG_ring_5_2732888 00:06:55.676 size: 0.125366 MiB name: RG_ring_2_2732888 00:06:55.676 size: 0.015991 MiB name: RG_ring_3_2732888 00:06:55.676 end memzones------- 00:06:55.676 10:14:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:55.676 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:55.676 list of free elements. size: 12.519348 MiB 00:06:55.676 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:55.676 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:55.676 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:55.676 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:55.676 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:55.676 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:55.676 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:55.676 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:55.676 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:55.676 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:55.676 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:55.676 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:55.677 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:55.677 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:55.677 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:55.677 list of standard malloc elements. 
size: 199.218079 MiB 00:06:55.677 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:55.677 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:55.677 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:55.677 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:55.677 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:55.677 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:55.677 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:55.677 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:55.677 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:55.677 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:55.677 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:55.677 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:55.677 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:55.677 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:55.677 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:55.677 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:55.677 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:55.677 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:55.677 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:55.677 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:55.677 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:55.677 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:55.677 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:55.677 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:55.677 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:55.677 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:55.677 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:55.677 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:55.677 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:55.677 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:55.677 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:55.677 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:55.677 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:55.677 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:55.677 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:55.677 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:55.677 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:55.677 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:55.677 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:55.677 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:55.677 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:55.677 list of memzone associated elements. 
size: 602.262573 MiB 00:06:55.677 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:55.677 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:55.677 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:55.677 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:55.677 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:55.677 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2732888_0 00:06:55.677 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:55.677 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2732888_0 00:06:55.677 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:55.677 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2732888_0 00:06:55.677 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:55.677 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:55.677 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:55.677 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:55.677 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:55.677 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2732888 00:06:55.677 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:55.677 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2732888 00:06:55.677 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:55.677 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2732888 00:06:55.677 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:55.677 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:55.677 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:55.677 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:55.677 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:55.677 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:55.677 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:55.677 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:55.677 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:55.677 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2732888 00:06:55.677 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:55.677 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2732888 00:06:55.677 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:55.677 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2732888 00:06:55.677 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:55.677 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2732888 00:06:55.677 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:55.677 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2732888 00:06:55.677 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:55.677 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:55.677 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:55.677 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:55.677 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:55.677 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:55.677 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:55.677 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_2732888 00:06:55.677 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:55.677 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:55.677 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:55.677 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:55.677 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:55.677 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2732888 00:06:55.677 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:55.677 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:55.677 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:55.677 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2732888 00:06:55.677 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:55.677 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2732888 00:06:55.677 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:55.677 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:55.677 10:14:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:55.677 10:14:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2732888 00:06:55.677 10:14:32 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 2732888 ']' 00:06:55.677 10:14:32 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 2732888 00:06:55.677 10:14:32 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:06:55.677 10:14:32 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:55.677 10:14:32 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2732888 00:06:55.677 10:14:32 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:55.678 10:14:32 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:55.678 10:14:32 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2732888' 00:06:55.678 killing process with pid 2732888 00:06:55.678 10:14:32 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 2732888 00:06:55.678 10:14:32 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 2732888 00:06:55.938 00:06:55.938 real 0m1.275s 00:06:55.938 user 0m1.334s 00:06:55.938 sys 0m0.369s 00:06:55.938 10:14:32 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:55.938 10:14:32 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:55.938 ************************************ 00:06:55.938 END TEST dpdk_mem_utility 00:06:55.938 ************************************ 00:06:55.938 10:14:32 -- common/autotest_common.sh@1142 -- # return 0 00:06:55.938 10:14:32 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:06:55.938 10:14:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:55.938 10:14:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.938 10:14:32 -- common/autotest_common.sh@10 -- # set +x 00:06:55.938 ************************************ 00:06:55.938 START TEST event 00:06:55.938 ************************************ 00:06:55.938 10:14:32 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:06:55.938 * Looking for test storage... 
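The dpdk_mem_utility dump above is produced in two steps: the env_dpdk_get_mem_stats RPC makes the target write its DPDK allocation state to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py then summarizes that file — first the overall heap/mempool/memzone totals, then the per-element detail for heap 0 via -m 0. A sketch of the same sequence against an already-running target, not part of the captured log:

SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk

# ask the running target to dump its DPDK memory state (the RPC reply names the dump file)
"$SPDK"/scripts/rpc.py env_dpdk_get_mem_stats

# overall summary: heaps, mempools, memzones
"$SPDK"/scripts/dpdk_mem_info.py

# per-element detail for heap id 0, matching the second half of the dump above
"$SPDK"/scripts/dpdk_mem_info.py -m 0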
00:06:55.938 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:06:55.938 10:14:33 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:55.938 10:14:33 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:55.938 10:14:33 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:55.938 10:14:33 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:55.938 10:14:33 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.938 10:14:33 event -- common/autotest_common.sh@10 -- # set +x 00:06:55.938 ************************************ 00:06:55.938 START TEST event_perf 00:06:55.938 ************************************ 00:06:55.938 10:14:33 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:56.207 Running I/O for 1 seconds...[2024-07-15 10:14:33.147019] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:56.207 [2024-07-15 10:14:33.147119] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2733279 ] 00:06:56.207 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.207 [2024-07-15 10:14:33.217357] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:56.207 [2024-07-15 10:14:33.285522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:56.207 [2024-07-15 10:14:33.285636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:56.207 [2024-07-15 10:14:33.285791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.207 Running I/O for 1 seconds...[2024-07-15 10:14:33.285792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:57.149 00:06:57.149 lcore 0: 176997 00:06:57.149 lcore 1: 176992 00:06:57.149 lcore 2: 176994 00:06:57.149 lcore 3: 176997 00:06:57.149 done. 00:06:57.149 00:06:57.149 real 0m1.212s 00:06:57.149 user 0m4.135s 00:06:57.149 sys 0m0.074s 00:06:57.149 10:14:34 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.149 10:14:34 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:57.149 ************************************ 00:06:57.149 END TEST event_perf 00:06:57.149 ************************************ 00:06:57.409 10:14:34 event -- common/autotest_common.sh@1142 -- # return 0 00:06:57.409 10:14:34 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:57.409 10:14:34 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:57.409 10:14:34 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.409 10:14:34 event -- common/autotest_common.sh@10 -- # set +x 00:06:57.409 ************************************ 00:06:57.409 START TEST event_reactor 00:06:57.409 ************************************ 00:06:57.409 10:14:34 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:57.409 [2024-07-15 10:14:34.430662] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
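event_perf above drives the event framework on four reactors (-m 0xF) for one second (-t 1) and prints one counter per lcore; the four counters in this run add up to 707,980 events in the window, i.e. roughly 177 k events per reactor per second. A small sketch of pulling that total out of the output, assuming the 'lcore N: count' format shown above:

SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk

# run the benchmark and sum the per-lcore counters
"$SPDK"/test/event/event_perf/event_perf -m 0xF -t 1 \
    | awk '/^lcore/ {sum += $NF} END {printf "total: %d events\n", sum}'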
00:06:57.409 [2024-07-15 10:14:34.430763] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2733525 ] 00:06:57.409 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.409 [2024-07-15 10:14:34.501340] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.409 [2024-07-15 10:14:34.568914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.789 test_start 00:06:58.789 oneshot 00:06:58.789 tick 100 00:06:58.789 tick 100 00:06:58.789 tick 250 00:06:58.789 tick 100 00:06:58.789 tick 100 00:06:58.789 tick 100 00:06:58.789 tick 250 00:06:58.789 tick 500 00:06:58.789 tick 100 00:06:58.789 tick 100 00:06:58.789 tick 250 00:06:58.789 tick 100 00:06:58.789 tick 100 00:06:58.789 test_end 00:06:58.789 00:06:58.789 real 0m1.211s 00:06:58.789 user 0m1.131s 00:06:58.789 sys 0m0.076s 00:06:58.789 10:14:35 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.789 10:14:35 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:58.789 ************************************ 00:06:58.789 END TEST event_reactor 00:06:58.789 ************************************ 00:06:58.789 10:14:35 event -- common/autotest_common.sh@1142 -- # return 0 00:06:58.789 10:14:35 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:58.789 10:14:35 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:58.789 10:14:35 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.789 10:14:35 event -- common/autotest_common.sh@10 -- # set +x 00:06:58.789 ************************************ 00:06:58.789 START TEST event_reactor_perf 00:06:58.789 ************************************ 00:06:58.789 10:14:35 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:58.789 [2024-07-15 10:14:35.708588] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:06:58.789 [2024-07-15 10:14:35.708690] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2733688 ] 00:06:58.789 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.789 [2024-07-15 10:14:35.778690] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.789 [2024-07-15 10:14:35.845694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.729 test_start 00:06:59.729 test_end 00:06:59.729 Performance: 369908 events per second 00:06:59.729 00:06:59.729 real 0m1.210s 00:06:59.729 user 0m1.131s 00:06:59.729 sys 0m0.074s 00:06:59.729 10:14:36 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.729 10:14:36 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:59.729 ************************************ 00:06:59.729 END TEST event_reactor_perf 00:06:59.729 ************************************ 00:06:59.989 10:14:36 event -- common/autotest_common.sh@1142 -- # return 0 00:06:59.989 10:14:36 event -- event/event.sh@49 -- # uname -s 00:06:59.989 10:14:36 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:59.989 10:14:36 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:59.989 10:14:36 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:59.989 10:14:36 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.989 10:14:36 event -- common/autotest_common.sh@10 -- # set +x 00:06:59.989 ************************************ 00:06:59.989 START TEST event_scheduler 00:06:59.989 ************************************ 00:06:59.989 10:14:36 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:59.989 * Looking for test storage... 00:06:59.989 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler 00:06:59.989 10:14:37 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:59.989 10:14:37 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2734051 00:06:59.989 10:14:37 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:59.989 10:14:37 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:59.989 10:14:37 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2734051 00:06:59.989 10:14:37 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 2734051 ']' 00:06:59.989 10:14:37 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.989 10:14:37 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:59.989 10:14:37 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:59.989 10:14:37 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:59.989 10:14:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:59.989 [2024-07-15 10:14:37.122220] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:59.989 [2024-07-15 10:14:37.122300] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2734051 ] 00:06:59.989 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.249 [2024-07-15 10:14:37.186773] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:00.249 [2024-07-15 10:14:37.250821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.249 [2024-07-15 10:14:37.250978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.249 [2024-07-15 10:14:37.251133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:00.249 [2024-07-15 10:14:37.251133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:00.821 10:14:37 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:00.821 10:14:37 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:07:00.821 10:14:37 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:00.821 10:14:37 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.821 10:14:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:00.821 [2024-07-15 10:14:37.917191] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:07:00.821 [2024-07-15 10:14:37.917204] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:07:00.821 [2024-07-15 10:14:37.917212] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:00.821 [2024-07-15 10:14:37.917216] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:00.821 [2024-07-15 10:14:37.917220] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:00.821 10:14:37 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.821 10:14:37 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:00.821 10:14:37 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.821 10:14:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:00.821 [2024-07-15 10:14:37.975828] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
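The scheduler test app above is launched with --wait-for-rpc, so the framework pauses before subsystem init; the harness then selects the dynamic scheduler and only afterwards calls framework_start_init (the dpdk_governor NOTICE about SMT siblings is non-fatal here). The same runtime switch can be made against a plain spdk_tgt — a sketch, not the harness's rpc_cmd wrapper:

SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
RPC="$SPDK"/scripts/rpc.py

# start with RPC available but initialization held back
"$SPDK"/build/bin/spdk_tgt -m 0xF --wait-for-rpc &
until $RPC rpc_get_methods &>/dev/null; do sleep 0.5; done

# pick the dynamic scheduler while the framework is still paused...
$RPC framework_set_scheduler dynamic

# ...then let initialization proceed
$RPC framework_start_init

# the active scheduler and its load/core/busy limits can be read back afterwards
$RPC framework_get_scheduler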
00:07:00.821 10:14:37 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.821 10:14:37 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:00.821 10:14:37 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:00.821 10:14:37 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.821 10:14:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:00.821 ************************************ 00:07:00.821 START TEST scheduler_create_thread 00:07:00.821 ************************************ 00:07:00.821 10:14:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:07:00.821 10:14:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:00.821 10:14:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.821 10:14:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:01.082 2 00:07:01.082 10:14:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.082 10:14:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:01.082 10:14:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.082 10:14:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:01.082 3 00:07:01.082 10:14:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.082 10:14:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:01.082 10:14:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.082 10:14:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:01.082 4 00:07:01.082 10:14:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.082 10:14:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:01.082 10:14:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.082 10:14:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:01.082 5 00:07:01.082 10:14:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.082 10:14:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:01.082 10:14:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.082 10:14:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:01.082 6 00:07:01.082 10:14:38 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.082 10:14:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:01.082 10:14:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.082 10:14:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:01.082 7 00:07:01.082 10:14:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.082 10:14:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:01.082 10:14:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.082 10:14:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:01.082 8 00:07:01.082 10:14:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.082 10:14:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:01.083 10:14:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.083 10:14:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:01.083 9 00:07:01.083 10:14:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.083 10:14:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:01.083 10:14:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.083 10:14:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:01.653 10 00:07:01.653 10:14:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.653 10:14:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:01.653 10:14:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.653 10:14:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:03.036 10:14:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.036 10:14:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:03.037 10:14:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:03.037 10:14:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.037 10:14:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:03.607 10:14:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:03.607 10:14:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:03.607 10:14:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:03.607 10:14:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:04.548 10:14:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:04.548 10:14:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:04.548 10:14:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:04.548 10:14:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:04.548 10:14:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.118 10:14:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.118 00:07:05.118 real 0m4.225s 00:07:05.118 user 0m0.026s 00:07:05.118 sys 0m0.004s 00:07:05.118 10:14:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.118 10:14:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.118 ************************************ 00:07:05.118 END TEST scheduler_create_thread 00:07:05.118 ************************************ 00:07:05.118 10:14:42 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:07:05.118 10:14:42 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:05.118 10:14:42 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2734051 00:07:05.118 10:14:42 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 2734051 ']' 00:07:05.118 10:14:42 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 2734051 00:07:05.118 10:14:42 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:07:05.118 10:14:42 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:05.118 10:14:42 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2734051 00:07:05.379 10:14:42 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:07:05.379 10:14:42 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:07:05.379 10:14:42 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2734051' 00:07:05.379 killing process with pid 2734051 00:07:05.379 10:14:42 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 2734051 00:07:05.379 10:14:42 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 2734051 00:07:05.640 [2024-07-15 10:14:42.617326] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
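The scheduler_create_thread test above exercises a small set of plugin RPCs (scheduler_thread_create, scheduler_thread_set_active, scheduler_thread_delete) against the scheduler test application. A rough equivalent of that sequence from the command line, assuming the scheduler_plugin module from SPDK's test tree is importable (e.g. via PYTHONPATH) and the app is on the default RPC socket:

  # active threads pinned to individual cores (mask 0x1..0x8, 100% busy)
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  # an unpinned thread that is active 30% of the time
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
  # create a thread at 0% activity, capture its id, then raise it to 50% active
  id=$(./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active "$id" 50
  # create a throw-away thread and delete it again
  id=$(./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete "$id"

The names and flags mirror the rpc_cmd invocations in the log; the exact plugin location and PYTHONPATH handling are assumptions not shown in this output.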
00:07:05.640 00:07:05.640 real 0m5.816s 00:07:05.640 user 0m13.703s 00:07:05.640 sys 0m0.370s 00:07:05.640 10:14:42 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.640 10:14:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:05.640 ************************************ 00:07:05.640 END TEST event_scheduler 00:07:05.640 ************************************ 00:07:05.640 10:14:42 event -- common/autotest_common.sh@1142 -- # return 0 00:07:05.640 10:14:42 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:05.640 10:14:42 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:05.640 10:14:42 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:05.640 10:14:42 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.640 10:14:42 event -- common/autotest_common.sh@10 -- # set +x 00:07:05.900 ************************************ 00:07:05.900 START TEST app_repeat 00:07:05.900 ************************************ 00:07:05.900 10:14:42 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:07:05.900 10:14:42 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.900 10:14:42 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:05.900 10:14:42 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:05.900 10:14:42 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:05.900 10:14:42 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:05.900 10:14:42 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:05.900 10:14:42 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:05.900 10:14:42 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2735308 00:07:05.900 10:14:42 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:05.900 10:14:42 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2735308' 00:07:05.900 Process app_repeat pid: 2735308 00:07:05.900 10:14:42 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:05.900 10:14:42 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:05.900 spdk_app_start Round 0 00:07:05.900 10:14:42 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2735308 /var/tmp/spdk-nbd.sock 00:07:05.900 10:14:42 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2735308 ']' 00:07:05.900 10:14:42 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:05.900 10:14:42 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:05.900 10:14:42 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:05.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:05.900 10:14:42 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:05.900 10:14:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:05.900 10:14:42 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:05.900 [2024-07-15 10:14:42.900613] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:05.900 [2024-07-15 10:14:42.900670] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2735308 ] 00:07:05.900 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.900 [2024-07-15 10:14:42.967472] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:05.900 [2024-07-15 10:14:43.033895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.900 [2024-07-15 10:14:43.033898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.845 10:14:43 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:06.845 10:14:43 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:07:06.845 10:14:43 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:06.845 Malloc0 00:07:06.845 10:14:43 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:06.845 Malloc1 00:07:06.845 10:14:44 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:06.845 10:14:44 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.845 10:14:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:06.845 10:14:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:06.845 10:14:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:06.845 10:14:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:06.845 10:14:44 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:06.845 10:14:44 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.845 10:14:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:06.845 10:14:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:06.845 10:14:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:06.845 10:14:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:06.845 10:14:44 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:06.845 10:14:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:06.845 10:14:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:06.845 10:14:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:07.108 /dev/nbd0 00:07:07.108 10:14:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:07.108 10:14:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:07.108 10:14:44 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:07.108 10:14:44 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:07.108 10:14:44 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:07.108 10:14:44 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:07.108 10:14:44 event.app_repeat -- 
common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:07.108 10:14:44 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:07:07.108 10:14:44 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:07.108 10:14:44 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:07.108 10:14:44 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:07.108 1+0 records in 00:07:07.108 1+0 records out 00:07:07.108 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000293059 s, 14.0 MB/s 00:07:07.108 10:14:44 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:07.108 10:14:44 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:07.108 10:14:44 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:07.108 10:14:44 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:07.108 10:14:44 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:07.108 10:14:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:07.108 10:14:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:07.108 10:14:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:07.369 /dev/nbd1 00:07:07.369 10:14:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:07.369 10:14:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:07.369 10:14:44 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:07.369 10:14:44 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:07.369 10:14:44 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:07.369 10:14:44 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:07.369 10:14:44 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:07.369 10:14:44 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:07:07.369 10:14:44 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:07.369 10:14:44 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:07.369 10:14:44 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:07.369 1+0 records in 00:07:07.369 1+0 records out 00:07:07.369 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000278472 s, 14.7 MB/s 00:07:07.369 10:14:44 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:07.369 10:14:44 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:07.369 10:14:44 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:07.369 10:14:44 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:07.369 10:14:44 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:07.369 10:14:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:07.369 10:14:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 
)) 00:07:07.369 10:14:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:07.369 10:14:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.369 10:14:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:07.369 10:14:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:07.369 { 00:07:07.369 "nbd_device": "/dev/nbd0", 00:07:07.369 "bdev_name": "Malloc0" 00:07:07.369 }, 00:07:07.369 { 00:07:07.369 "nbd_device": "/dev/nbd1", 00:07:07.369 "bdev_name": "Malloc1" 00:07:07.369 } 00:07:07.369 ]' 00:07:07.369 10:14:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:07.369 { 00:07:07.369 "nbd_device": "/dev/nbd0", 00:07:07.369 "bdev_name": "Malloc0" 00:07:07.369 }, 00:07:07.369 { 00:07:07.369 "nbd_device": "/dev/nbd1", 00:07:07.369 "bdev_name": "Malloc1" 00:07:07.369 } 00:07:07.369 ]' 00:07:07.369 10:14:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:07.630 10:14:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:07.630 /dev/nbd1' 00:07:07.630 10:14:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:07.630 /dev/nbd1' 00:07:07.630 10:14:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:07.630 10:14:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:07.630 10:14:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:07.630 10:14:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:07.630 10:14:44 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:07.630 10:14:44 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:07.630 10:14:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:07.630 10:14:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:07.630 10:14:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:07.630 10:14:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:07:07.630 10:14:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:07.630 10:14:44 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:07.630 256+0 records in 00:07:07.630 256+0 records out 00:07:07.630 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0113197 s, 92.6 MB/s 00:07:07.630 10:14:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:07.630 10:14:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:07.630 256+0 records in 00:07:07.630 256+0 records out 00:07:07.630 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0151389 s, 69.3 MB/s 00:07:07.630 10:14:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:07.630 10:14:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:07.630 256+0 records in 00:07:07.630 256+0 records out 00:07:07.630 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.015994 s, 65.6 MB/s 00:07:07.630 10:14:44 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:07.630 10:14:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:07.630 10:14:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:07.630 10:14:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:07.630 10:14:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:07:07.630 10:14:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:07.630 10:14:44 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:07.630 10:14:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:07.630 10:14:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:07.630 10:14:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:07.630 10:14:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:07.630 10:14:44 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:07:07.630 10:14:44 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:07.630 10:14:44 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.630 10:14:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:07.630 10:14:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:07.630 10:14:44 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:07.630 10:14:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.630 10:14:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:07.630 10:14:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:07.890 10:14:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:07.891 10:14:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:07.891 10:14:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.891 10:14:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.891 10:14:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:07.891 10:14:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:07.891 10:14:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.891 10:14:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.891 10:14:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:07.891 10:14:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:07.891 10:14:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:07.891 10:14:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:07.891 10:14:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.891 10:14:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.891 
10:14:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:07.891 10:14:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:07.891 10:14:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.891 10:14:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:07.891 10:14:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.891 10:14:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:08.151 10:14:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:08.151 10:14:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:08.151 10:14:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:08.151 10:14:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:08.151 10:14:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:08.151 10:14:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:08.152 10:14:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:08.152 10:14:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:08.152 10:14:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:08.152 10:14:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:08.152 10:14:45 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:08.152 10:14:45 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:08.152 10:14:45 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:08.413 10:14:45 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:08.413 [2024-07-15 10:14:45.496340] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:08.413 [2024-07-15 10:14:45.559740] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:08.413 [2024-07-15 10:14:45.559744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.413 [2024-07-15 10:14:45.591195] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:08.413 [2024-07-15 10:14:45.591228] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:11.711 10:14:48 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:11.711 10:14:48 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:11.711 spdk_app_start Round 1 00:07:11.711 10:14:48 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2735308 /var/tmp/spdk-nbd.sock 00:07:11.711 10:14:48 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2735308 ']' 00:07:11.711 10:14:48 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:11.711 10:14:48 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:11.711 10:14:48 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:11.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:11.711 10:14:48 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:11.711 10:14:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:11.711 10:14:48 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:11.711 10:14:48 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:07:11.711 10:14:48 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:11.711 Malloc0 00:07:11.711 10:14:48 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:11.711 Malloc1 00:07:11.711 10:14:48 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:11.711 10:14:48 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.711 10:14:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:11.711 10:14:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:11.711 10:14:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:11.711 10:14:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:11.711 10:14:48 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:11.711 10:14:48 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.711 10:14:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:11.711 10:14:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:11.711 10:14:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:11.711 10:14:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:11.711 10:14:48 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:11.711 10:14:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:11.711 10:14:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:11.711 10:14:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:11.971 /dev/nbd0 00:07:11.971 10:14:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:11.971 10:14:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:11.971 10:14:48 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:11.971 10:14:48 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:11.971 10:14:48 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:11.971 10:14:48 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:11.971 10:14:48 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:11.971 10:14:48 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:07:11.971 10:14:48 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:11.971 10:14:48 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:11.971 10:14:48 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:07:11.971 1+0 records in 00:07:11.971 1+0 records out 00:07:11.971 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277575 s, 14.8 MB/s 00:07:11.971 10:14:49 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:11.971 10:14:49 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:11.971 10:14:49 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:11.971 10:14:49 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:11.971 10:14:49 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:11.971 10:14:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:11.971 10:14:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:11.971 10:14:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:12.231 /dev/nbd1 00:07:12.231 10:14:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:12.231 10:14:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:12.231 10:14:49 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:12.231 10:14:49 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:12.231 10:14:49 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:12.231 10:14:49 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:12.231 10:14:49 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:12.231 10:14:49 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:07:12.231 10:14:49 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:12.231 10:14:49 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:12.231 10:14:49 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:12.231 1+0 records in 00:07:12.231 1+0 records out 00:07:12.231 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274232 s, 14.9 MB/s 00:07:12.231 10:14:49 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:12.231 10:14:49 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:12.231 10:14:49 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:12.231 10:14:49 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:12.231 10:14:49 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:12.231 10:14:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:12.231 10:14:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:12.231 10:14:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:12.231 10:14:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.231 10:14:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:12.231 10:14:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:12.231 { 00:07:12.231 
"nbd_device": "/dev/nbd0", 00:07:12.231 "bdev_name": "Malloc0" 00:07:12.231 }, 00:07:12.231 { 00:07:12.231 "nbd_device": "/dev/nbd1", 00:07:12.231 "bdev_name": "Malloc1" 00:07:12.231 } 00:07:12.231 ]' 00:07:12.231 10:14:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:12.231 { 00:07:12.231 "nbd_device": "/dev/nbd0", 00:07:12.231 "bdev_name": "Malloc0" 00:07:12.231 }, 00:07:12.231 { 00:07:12.231 "nbd_device": "/dev/nbd1", 00:07:12.231 "bdev_name": "Malloc1" 00:07:12.231 } 00:07:12.231 ]' 00:07:12.231 10:14:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:12.231 10:14:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:12.231 /dev/nbd1' 00:07:12.231 10:14:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:12.231 /dev/nbd1' 00:07:12.231 10:14:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:12.231 10:14:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:12.231 10:14:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:12.231 10:14:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:12.231 10:14:49 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:12.231 10:14:49 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:12.231 10:14:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:12.231 10:14:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:12.231 10:14:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:12.231 10:14:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:07:12.231 10:14:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:12.231 10:14:49 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:12.490 256+0 records in 00:07:12.490 256+0 records out 00:07:12.490 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124424 s, 84.3 MB/s 00:07:12.490 10:14:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:12.490 10:14:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:12.490 256+0 records in 00:07:12.490 256+0 records out 00:07:12.490 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.015669 s, 66.9 MB/s 00:07:12.490 10:14:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:12.490 10:14:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:12.490 256+0 records in 00:07:12.490 256+0 records out 00:07:12.490 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0169855 s, 61.7 MB/s 00:07:12.490 10:14:49 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:12.490 10:14:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:12.490 10:14:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:12.490 10:14:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:12.490 10:14:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:07:12.490 10:14:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:12.490 10:14:49 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:12.490 10:14:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:12.490 10:14:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:12.490 10:14:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:12.490 10:14:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:12.490 10:14:49 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:07:12.490 10:14:49 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:12.490 10:14:49 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.490 10:14:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:12.490 10:14:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:12.490 10:14:49 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:12.490 10:14:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.490 10:14:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:12.490 10:14:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:12.490 10:14:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:12.490 10:14:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:12.490 10:14:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.490 10:14:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.490 10:14:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:12.490 10:14:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:12.490 10:14:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:12.490 10:14:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.490 10:14:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:12.750 10:14:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:12.750 10:14:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:12.750 10:14:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:12.750 10:14:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.750 10:14:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.750 10:14:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:12.750 10:14:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:12.750 10:14:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:12.750 10:14:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:12.750 10:14:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.750 10:14:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:13.010 10:14:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:13.010 10:14:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:13.010 10:14:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:13.010 10:14:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:13.010 10:14:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:13.010 10:14:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:13.010 10:14:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:13.010 10:14:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:13.010 10:14:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:13.010 10:14:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:13.010 10:14:50 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:13.010 10:14:50 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:13.010 10:14:50 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:13.010 10:14:50 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:13.270 [2024-07-15 10:14:50.334955] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:13.270 [2024-07-15 10:14:50.397787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.270 [2024-07-15 10:14:50.397792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.270 [2024-07-15 10:14:50.429950] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:13.270 [2024-07-15 10:14:50.429986] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:16.561 10:14:53 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:16.561 10:14:53 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:16.561 spdk_app_start Round 2 00:07:16.561 10:14:53 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2735308 /var/tmp/spdk-nbd.sock 00:07:16.561 10:14:53 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2735308 ']' 00:07:16.561 10:14:53 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:16.561 10:14:53 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:16.561 10:14:53 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:16.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:16.561 10:14:53 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:16.561 10:14:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:16.561 10:14:53 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:16.561 10:14:53 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:07:16.561 10:14:53 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:16.561 Malloc0 00:07:16.561 10:14:53 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:16.561 Malloc1 00:07:16.561 10:14:53 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:16.561 10:14:53 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:16.561 10:14:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:16.561 10:14:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:16.561 10:14:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:16.561 10:14:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:16.561 10:14:53 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:16.561 10:14:53 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:16.561 10:14:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:16.562 10:14:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:16.562 10:14:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:16.562 10:14:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:16.562 10:14:53 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:16.562 10:14:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:16.562 10:14:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:16.562 10:14:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:16.821 /dev/nbd0 00:07:16.822 10:14:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:16.822 10:14:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:16.822 10:14:53 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:16.822 10:14:53 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:16.822 10:14:53 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:16.822 10:14:53 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:16.822 10:14:53 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:16.822 10:14:53 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:07:16.822 10:14:53 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:16.822 10:14:53 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:16.822 10:14:53 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:07:16.822 1+0 records in 00:07:16.822 1+0 records out 00:07:16.822 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000280062 s, 14.6 MB/s 00:07:16.822 10:14:53 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:16.822 10:14:53 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:16.822 10:14:53 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:16.822 10:14:53 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:16.822 10:14:53 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:16.822 10:14:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:16.822 10:14:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:16.822 10:14:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:17.082 /dev/nbd1 00:07:17.082 10:14:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:17.082 10:14:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:17.082 10:14:54 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:17.082 10:14:54 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:07:17.082 10:14:54 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:17.082 10:14:54 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:17.082 10:14:54 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:17.082 10:14:54 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:07:17.082 10:14:54 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:17.082 10:14:54 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:17.082 10:14:54 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:17.082 1+0 records in 00:07:17.082 1+0 records out 00:07:17.082 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000205247 s, 20.0 MB/s 00:07:17.082 10:14:54 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:17.082 10:14:54 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:07:17.082 10:14:54 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:17.082 10:14:54 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:17.082 10:14:54 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:07:17.082 10:14:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:17.082 10:14:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:17.082 10:14:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:17.082 10:14:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.082 10:14:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:17.082 10:14:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:17.082 { 00:07:17.082 
"nbd_device": "/dev/nbd0", 00:07:17.082 "bdev_name": "Malloc0" 00:07:17.082 }, 00:07:17.082 { 00:07:17.082 "nbd_device": "/dev/nbd1", 00:07:17.082 "bdev_name": "Malloc1" 00:07:17.082 } 00:07:17.082 ]' 00:07:17.082 10:14:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:17.082 { 00:07:17.082 "nbd_device": "/dev/nbd0", 00:07:17.082 "bdev_name": "Malloc0" 00:07:17.082 }, 00:07:17.082 { 00:07:17.082 "nbd_device": "/dev/nbd1", 00:07:17.082 "bdev_name": "Malloc1" 00:07:17.082 } 00:07:17.082 ]' 00:07:17.082 10:14:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:17.343 10:14:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:17.343 /dev/nbd1' 00:07:17.343 10:14:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:17.343 /dev/nbd1' 00:07:17.343 10:14:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:17.343 10:14:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:17.343 10:14:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:17.343 10:14:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:17.343 10:14:54 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:17.343 10:14:54 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:17.343 10:14:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:17.343 10:14:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:17.343 10:14:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:17.343 10:14:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:07:17.343 10:14:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:17.343 10:14:54 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:17.343 256+0 records in 00:07:17.343 256+0 records out 00:07:17.343 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124537 s, 84.2 MB/s 00:07:17.343 10:14:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:17.343 10:14:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:17.343 256+0 records in 00:07:17.343 256+0 records out 00:07:17.343 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0159418 s, 65.8 MB/s 00:07:17.343 10:14:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:17.343 10:14:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:17.343 256+0 records in 00:07:17.343 256+0 records out 00:07:17.343 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0166312 s, 63.0 MB/s 00:07:17.343 10:14:54 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:17.343 10:14:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:17.343 10:14:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:17.343 10:14:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:17.343 10:14:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:07:17.343 10:14:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:17.343 10:14:54 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:17.343 10:14:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:17.343 10:14:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:17.343 10:14:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:17.343 10:14:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:17.343 10:14:54 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:07:17.343 10:14:54 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:17.343 10:14:54 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.343 10:14:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:17.343 10:14:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:17.343 10:14:54 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:17.343 10:14:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:17.343 10:14:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:17.343 10:14:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:17.343 10:14:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:17.343 10:14:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:17.343 10:14:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:17.343 10:14:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:17.343 10:14:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:17.343 10:14:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:17.343 10:14:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:17.343 10:14:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:17.343 10:14:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:17.604 10:14:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:17.604 10:14:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:17.604 10:14:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:17.604 10:14:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:17.604 10:14:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:17.604 10:14:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:17.604 10:14:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:17.604 10:14:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:17.604 10:14:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:17.604 10:14:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.604 10:14:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:17.864 10:14:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:17.864 10:14:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:17.864 10:14:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:17.864 10:14:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:17.864 10:14:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:17.864 10:14:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:17.864 10:14:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:17.864 10:14:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:17.864 10:14:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:17.864 10:14:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:17.864 10:14:54 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:17.864 10:14:54 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:17.864 10:14:54 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:18.124 10:14:55 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:18.124 [2024-07-15 10:14:55.201170] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:18.124 [2024-07-15 10:14:55.265283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.124 [2024-07-15 10:14:55.265305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.124 [2024-07-15 10:14:55.296741] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:18.124 [2024-07-15 10:14:55.296775] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:21.421 10:14:58 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2735308 /var/tmp/spdk-nbd.sock 00:07:21.421 10:14:58 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 2735308 ']' 00:07:21.421 10:14:58 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:21.421 10:14:58 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:21.421 10:14:58 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:21.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
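Note: the nbd_common.sh trace above boils down to a write-then-compare check: 1 MiB of random data is written to a temp file, copied onto each exported NBD device with O_DIRECT, then each device is compared byte-for-byte against the file before the devices are detached and the empty nbd_get_disks result is asserted. A condensed sketch of that flow, reconstructed from the traced commands (the helper name and temp-file path here are illustrative, not the literal library code):

verify_nbd_devices() {
    # e.g. verify_nbd_devices /dev/nbd0 /dev/nbd1
    local nbd_list=("$@")
    local tmp_file=/tmp/nbdrandtest                                 # stand-in for the suite's temp path
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256             # 1 MiB of random data
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct  # write pass
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"                             # verify pass: each device must read back identically
    done
    rm "$tmp_file"
}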
00:07:21.421 10:14:58 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:21.421 10:14:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:21.421 10:14:58 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:21.421 10:14:58 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:07:21.421 10:14:58 event.app_repeat -- event/event.sh@39 -- # killprocess 2735308 00:07:21.421 10:14:58 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 2735308 ']' 00:07:21.421 10:14:58 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 2735308 00:07:21.421 10:14:58 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:07:21.421 10:14:58 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:21.421 10:14:58 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2735308 00:07:21.421 10:14:58 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:21.421 10:14:58 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:21.421 10:14:58 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2735308' 00:07:21.421 killing process with pid 2735308 00:07:21.421 10:14:58 event.app_repeat -- common/autotest_common.sh@967 -- # kill 2735308 00:07:21.421 10:14:58 event.app_repeat -- common/autotest_common.sh@972 -- # wait 2735308 00:07:21.421 spdk_app_start is called in Round 0. 00:07:21.421 Shutdown signal received, stop current app iteration 00:07:21.421 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 reinitialization... 00:07:21.421 spdk_app_start is called in Round 1. 00:07:21.421 Shutdown signal received, stop current app iteration 00:07:21.421 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 reinitialization... 00:07:21.421 spdk_app_start is called in Round 2. 00:07:21.421 Shutdown signal received, stop current app iteration 00:07:21.421 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 reinitialization... 00:07:21.421 spdk_app_start is called in Round 3. 
00:07:21.421 Shutdown signal received, stop current app iteration 00:07:21.421 10:14:58 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:21.421 10:14:58 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:21.421 00:07:21.421 real 0m15.528s 00:07:21.421 user 0m33.597s 00:07:21.421 sys 0m2.090s 00:07:21.421 10:14:58 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.421 10:14:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:21.421 ************************************ 00:07:21.421 END TEST app_repeat 00:07:21.421 ************************************ 00:07:21.421 10:14:58 event -- common/autotest_common.sh@1142 -- # return 0 00:07:21.421 10:14:58 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:21.421 10:14:58 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:21.421 10:14:58 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:21.421 10:14:58 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.421 10:14:58 event -- common/autotest_common.sh@10 -- # set +x 00:07:21.422 ************************************ 00:07:21.422 START TEST cpu_locks 00:07:21.422 ************************************ 00:07:21.422 10:14:58 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:21.422 * Looking for test storage... 00:07:21.422 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:07:21.422 10:14:58 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:21.422 10:14:58 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:21.422 10:14:58 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:21.422 10:14:58 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:21.422 10:14:58 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:21.422 10:14:58 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.422 10:14:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:21.422 ************************************ 00:07:21.422 START TEST default_locks 00:07:21.422 ************************************ 00:07:21.422 10:14:58 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:07:21.422 10:14:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2738685 00:07:21.422 10:14:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2738685 00:07:21.422 10:14:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:21.422 10:14:58 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 2738685 ']' 00:07:21.422 10:14:58 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.422 10:14:58 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:21.422 10:14:58 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
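Note: app_repeat finishes above after four start/stop rounds (about 15.5 s wall time), and the cpu_locks suite that starts here revolves around one recurring check: whether the target process holds a POSIX file lock named /var/tmp/spdk_cpu_lock_NNN for each core it claimed. default_locks, the first sub-test, asserts that the lock is present while spdk_tgt -m 0x1 is running and gone once the process is killed. The check itself, reconstructed from the two traced commands (the wrapper name matches the traced helper, the body is a sketch):

locks_exist() {
    local pid=$1
    # true if the pid holds at least one spdk_cpu_lock_* file lock
    lslocks -p "$pid" | grep -q spdk_cpu_lock
}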
00:07:21.422 10:14:58 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:21.422 10:14:58 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:21.683 [2024-07-15 10:14:58.675039] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:21.683 [2024-07-15 10:14:58.675104] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2738685 ] 00:07:21.683 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.683 [2024-07-15 10:14:58.742670] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.683 [2024-07-15 10:14:58.807147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.254 10:14:59 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:22.254 10:14:59 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:07:22.254 10:14:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2738685 00:07:22.254 10:14:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2738685 00:07:22.254 10:14:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:22.824 lslocks: write error 00:07:22.824 10:14:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2738685 00:07:22.824 10:14:59 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 2738685 ']' 00:07:22.824 10:14:59 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 2738685 00:07:22.824 10:14:59 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:07:22.824 10:14:59 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:22.824 10:14:59 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2738685 00:07:22.825 10:14:59 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:22.825 10:14:59 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:22.825 10:14:59 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2738685' 00:07:22.825 killing process with pid 2738685 00:07:22.825 10:14:59 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 2738685 00:07:22.825 10:14:59 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 2738685 00:07:23.086 10:15:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2738685 00:07:23.086 10:15:00 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:07:23.086 10:15:00 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2738685 00:07:23.086 10:15:00 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:23.086 10:15:00 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:23.086 10:15:00 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:23.086 10:15:00 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:23.086 10:15:00 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 2738685 00:07:23.086 10:15:00 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 2738685 ']' 00:07:23.086 10:15:00 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.086 10:15:00 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:23.086 10:15:00 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.086 10:15:00 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:23.086 10:15:00 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:23.086 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2738685) - No such process 00:07:23.086 ERROR: process (pid: 2738685) is no longer running 00:07:23.086 10:15:00 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:23.086 10:15:00 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:07:23.086 10:15:00 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:07:23.086 10:15:00 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:23.086 10:15:00 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:23.086 10:15:00 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:23.086 10:15:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:23.086 10:15:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:23.086 10:15:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:23.086 10:15:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:23.086 00:07:23.086 real 0m1.459s 00:07:23.086 user 0m1.556s 00:07:23.086 sys 0m0.488s 00:07:23.086 10:15:00 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:23.086 10:15:00 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:23.086 ************************************ 00:07:23.086 END TEST default_locks 00:07:23.086 ************************************ 00:07:23.086 10:15:00 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:23.086 10:15:00 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:23.086 10:15:00 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:23.086 10:15:00 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.086 10:15:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:23.086 ************************************ 00:07:23.086 START TEST default_locks_via_rpc 00:07:23.086 ************************************ 00:07:23.086 10:15:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:07:23.086 10:15:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2739072 00:07:23.086 10:15:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2739072 00:07:23.086 10:15:00 event.cpu_locks.default_locks_via_rpc -- 
event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:23.086 10:15:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2739072 ']' 00:07:23.086 10:15:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.086 10:15:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:23.086 10:15:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.086 10:15:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:23.086 10:15:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.086 [2024-07-15 10:15:00.194415] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:23.086 [2024-07-15 10:15:00.194467] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2739072 ] 00:07:23.086 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.086 [2024-07-15 10:15:00.262013] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.345 [2024-07-15 10:15:00.332673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.915 10:15:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:23.915 10:15:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:23.915 10:15:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:23.915 10:15:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.915 10:15:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.915 10:15:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.915 10:15:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:23.915 10:15:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:23.915 10:15:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:23.915 10:15:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:23.915 10:15:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:23.915 10:15:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:23.915 10:15:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.915 10:15:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:23.915 10:15:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2739072 00:07:23.915 10:15:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2739072 00:07:23.915 10:15:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 
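Note: default_locks_via_rpc, traced above, toggles the same locks at runtime instead of restarting the target. The two RPC method names appear verbatim in the trace; the surrounding checks are simplified here and tgt_pid stands for the target pid (2739072 in this run):

./scripts/rpc.py framework_disable_cpumask_locks   # lock files released; the suite's no_locks check expects none left
./scripts/rpc.py framework_enable_cpumask_locks    # locks re-acquired
lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock      # and the lslocks check passes again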
00:07:24.175 10:15:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2739072 00:07:24.175 10:15:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 2739072 ']' 00:07:24.175 10:15:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 2739072 00:07:24.175 10:15:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:07:24.175 10:15:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:24.175 10:15:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2739072 00:07:24.175 10:15:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:24.175 10:15:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:24.175 10:15:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2739072' 00:07:24.175 killing process with pid 2739072 00:07:24.175 10:15:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 2739072 00:07:24.175 10:15:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 2739072 00:07:24.436 00:07:24.436 real 0m1.252s 00:07:24.436 user 0m1.352s 00:07:24.436 sys 0m0.385s 00:07:24.436 10:15:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:24.436 10:15:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.436 ************************************ 00:07:24.436 END TEST default_locks_via_rpc 00:07:24.436 ************************************ 00:07:24.436 10:15:01 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:24.436 10:15:01 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:24.436 10:15:01 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:24.436 10:15:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.436 10:15:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:24.436 ************************************ 00:07:24.436 START TEST non_locking_app_on_locked_coremask 00:07:24.436 ************************************ 00:07:24.436 10:15:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:07:24.436 10:15:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2739357 00:07:24.436 10:15:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2739357 /var/tmp/spdk.sock 00:07:24.436 10:15:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:24.436 10:15:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2739357 ']' 00:07:24.436 10:15:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.436 10:15:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:24.436 10:15:01 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:24.436 10:15:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:24.436 10:15:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:24.436 [2024-07-15 10:15:01.521100] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:24.436 [2024-07-15 10:15:01.521156] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2739357 ] 00:07:24.436 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.436 [2024-07-15 10:15:01.588298] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.697 [2024-07-15 10:15:01.654215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.268 10:15:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:25.268 10:15:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:25.268 10:15:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:25.268 10:15:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2739541 00:07:25.268 10:15:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2739541 /var/tmp/spdk2.sock 00:07:25.268 10:15:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2739541 ']' 00:07:25.268 10:15:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:25.268 10:15:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:25.268 10:15:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:25.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:25.268 10:15:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:25.268 10:15:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:25.268 [2024-07-15 10:15:02.343157] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:25.268 [2024-07-15 10:15:02.343228] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2739541 ] 00:07:25.268 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.268 [2024-07-15 10:15:02.445597] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
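Note: non_locking_app_on_locked_coremask, whose two targets have both started by this point, shows the co-existence case: the second instance uses the same core mask but skips the lock, which is what the "CPU core locks deactivated" notice above refers to. In sketch form (flags, masks and sockets as traced; the spdk_tgt path is shortened):

spdk_tgt -m 0x1 &                                                  # first instance claims core 0 (/var/tmp/spdk_cpu_lock_000)
spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # same core, no lock taken, so both keep running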
00:07:25.268 [2024-07-15 10:15:02.445629] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.527 [2024-07-15 10:15:02.578969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.097 10:15:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:26.097 10:15:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:26.097 10:15:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2739357 00:07:26.097 10:15:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2739357 00:07:26.097 10:15:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:26.357 lslocks: write error 00:07:26.357 10:15:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2739357 00:07:26.357 10:15:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2739357 ']' 00:07:26.357 10:15:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2739357 00:07:26.357 10:15:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:26.357 10:15:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:26.357 10:15:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2739357 00:07:26.357 10:15:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:26.357 10:15:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:26.357 10:15:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2739357' 00:07:26.357 killing process with pid 2739357 00:07:26.357 10:15:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2739357 00:07:26.357 10:15:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2739357 00:07:26.983 10:15:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2739541 00:07:26.983 10:15:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2739541 ']' 00:07:26.983 10:15:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2739541 00:07:26.983 10:15:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:26.983 10:15:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:26.983 10:15:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2739541 00:07:26.983 10:15:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:26.983 10:15:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:26.983 10:15:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2739541' 00:07:26.983 
killing process with pid 2739541 00:07:26.983 10:15:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2739541 00:07:26.983 10:15:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2739541 00:07:27.244 00:07:27.244 real 0m2.732s 00:07:27.244 user 0m2.996s 00:07:27.244 sys 0m0.801s 00:07:27.244 10:15:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:27.244 10:15:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:27.244 ************************************ 00:07:27.244 END TEST non_locking_app_on_locked_coremask 00:07:27.244 ************************************ 00:07:27.244 10:15:04 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:27.244 10:15:04 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:27.244 10:15:04 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:27.244 10:15:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.244 10:15:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:27.244 ************************************ 00:07:27.244 START TEST locking_app_on_unlocked_coremask 00:07:27.244 ************************************ 00:07:27.244 10:15:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:07:27.244 10:15:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2739917 00:07:27.244 10:15:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2739917 /var/tmp/spdk.sock 00:07:27.244 10:15:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:27.244 10:15:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2739917 ']' 00:07:27.244 10:15:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.244 10:15:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:27.244 10:15:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.244 10:15:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:27.244 10:15:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:27.244 [2024-07-15 10:15:04.325136] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:27.244 [2024-07-15 10:15:04.325195] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2739917 ] 00:07:27.244 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.244 [2024-07-15 10:15:04.393434] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:27.244 [2024-07-15 10:15:04.393466] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.504 [2024-07-15 10:15:04.463739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.070 10:15:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:28.070 10:15:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:28.070 10:15:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2740246 00:07:28.070 10:15:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2740246 /var/tmp/spdk2.sock 00:07:28.070 10:15:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2740246 ']' 00:07:28.070 10:15:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:28.070 10:15:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:28.070 10:15:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:28.070 10:15:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:28.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:28.070 10:15:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:28.070 10:15:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:28.070 [2024-07-15 10:15:05.156884] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:28.070 [2024-07-15 10:15:05.156941] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2740246 ] 00:07:28.070 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.070 [2024-07-15 10:15:05.254614] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.329 [2024-07-15 10:15:05.383752] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.897 10:15:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:28.897 10:15:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:28.897 10:15:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2740246 00:07:28.897 10:15:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2740246 00:07:28.897 10:15:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:29.465 lslocks: write error 00:07:29.465 10:15:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2739917 00:07:29.465 10:15:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2739917 ']' 00:07:29.465 10:15:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 2739917 00:07:29.465 10:15:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:29.465 10:15:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:29.465 10:15:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2739917 00:07:29.465 10:15:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:29.465 10:15:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:29.465 10:15:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2739917' 00:07:29.465 killing process with pid 2739917 00:07:29.465 10:15:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 2739917 00:07:29.465 10:15:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 2739917 00:07:30.034 10:15:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2740246 00:07:30.034 10:15:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2740246 ']' 00:07:30.034 10:15:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 2740246 00:07:30.034 10:15:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:30.034 10:15:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:30.034 10:15:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2740246 00:07:30.034 10:15:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:07:30.034 10:15:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:30.034 10:15:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2740246' 00:07:30.034 killing process with pid 2740246 00:07:30.034 10:15:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 2740246 00:07:30.034 10:15:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 2740246 00:07:30.294 00:07:30.294 real 0m2.962s 00:07:30.294 user 0m3.226s 00:07:30.294 sys 0m0.902s 00:07:30.294 10:15:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:30.294 10:15:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:30.294 ************************************ 00:07:30.294 END TEST locking_app_on_unlocked_coremask 00:07:30.294 ************************************ 00:07:30.294 10:15:07 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:30.294 10:15:07 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:30.294 10:15:07 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:30.294 10:15:07 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:30.294 10:15:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:30.294 ************************************ 00:07:30.294 START TEST locking_app_on_locked_coremask 00:07:30.294 ************************************ 00:07:30.294 10:15:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:07:30.294 10:15:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2740621 00:07:30.294 10:15:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2740621 /var/tmp/spdk.sock 00:07:30.294 10:15:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:30.294 10:15:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2740621 ']' 00:07:30.294 10:15:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.294 10:15:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:30.294 10:15:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.294 10:15:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:30.294 10:15:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:30.294 [2024-07-15 10:15:07.359353] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:30.294 [2024-07-15 10:15:07.359403] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2740621 ] 00:07:30.294 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.294 [2024-07-15 10:15:07.426802] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.294 [2024-07-15 10:15:07.490756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.233 10:15:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:31.233 10:15:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:31.233 10:15:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2740777 00:07:31.233 10:15:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2740777 /var/tmp/spdk2.sock 00:07:31.233 10:15:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:31.233 10:15:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:31.233 10:15:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2740777 /var/tmp/spdk2.sock 00:07:31.233 10:15:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:31.233 10:15:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:31.233 10:15:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:31.233 10:15:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:31.233 10:15:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2740777 /var/tmp/spdk2.sock 00:07:31.233 10:15:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 2740777 ']' 00:07:31.233 10:15:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:31.233 10:15:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:31.233 10:15:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:31.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:31.233 10:15:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:31.233 10:15:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:31.233 [2024-07-15 10:15:08.186427] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:31.233 [2024-07-15 10:15:08.186480] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2740777 ] 00:07:31.233 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.233 [2024-07-15 10:15:08.286784] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2740621 has claimed it. 00:07:31.233 [2024-07-15 10:15:08.286827] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:31.803 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2740777) - No such process 00:07:31.803 ERROR: process (pid: 2740777) is no longer running 00:07:31.803 10:15:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:31.803 10:15:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:31.803 10:15:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:31.803 10:15:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:31.803 10:15:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:31.803 10:15:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:31.803 10:15:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2740621 00:07:31.803 10:15:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2740621 00:07:31.803 10:15:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:32.374 lslocks: write error 00:07:32.374 10:15:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2740621 00:07:32.374 10:15:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 2740621 ']' 00:07:32.374 10:15:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 2740621 00:07:32.374 10:15:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:32.374 10:15:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:32.374 10:15:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2740621 00:07:32.374 10:15:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:32.374 10:15:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:32.374 10:15:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2740621' 00:07:32.374 killing process with pid 2740621 00:07:32.374 10:15:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 2740621 00:07:32.374 10:15:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 2740621 00:07:32.374 00:07:32.374 real 0m2.250s 00:07:32.374 user 0m2.475s 00:07:32.374 sys 0m0.652s 00:07:32.374 10:15:09 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:32.374 10:15:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:32.374 ************************************ 00:07:32.374 END TEST locking_app_on_locked_coremask 00:07:32.374 ************************************ 00:07:32.636 10:15:09 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:32.636 10:15:09 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:32.636 10:15:09 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:32.636 10:15:09 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.636 10:15:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:32.636 ************************************ 00:07:32.636 START TEST locking_overlapped_coremask 00:07:32.636 ************************************ 00:07:32.636 10:15:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:07:32.636 10:15:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2741342 00:07:32.636 10:15:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2741342 /var/tmp/spdk.sock 00:07:32.636 10:15:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:32.636 10:15:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 2741342 ']' 00:07:32.636 10:15:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.636 10:15:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:32.636 10:15:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.636 10:15:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:32.636 10:15:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:32.636 [2024-07-15 10:15:09.697220] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
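Note: locking_app_on_locked_coremask, which finishes just above before locking_overlapped_coremask starts, is the conflict case: without --disable-cpumask-locks the second instance on an already-claimed core refuses to start. Sketch, with the error text and pid as printed in the trace and paths shortened:

spdk_tgt -m 0x1 &                          # pid 2740621 claims core 0
spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock     # second instance, no --disable-cpumask-locks:
                                           #   "Cannot create lock on core 0, probably process 2740621 has claimed it"
                                           #   "Unable to acquire lock on assigned core mask - exiting."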
00:07:32.636 [2024-07-15 10:15:09.697295] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2741342 ] 00:07:32.636 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.636 [2024-07-15 10:15:09.766884] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:32.896 [2024-07-15 10:15:09.841338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:32.896 [2024-07-15 10:15:09.841631] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:32.896 [2024-07-15 10:15:09.841634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.464 10:15:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:33.464 10:15:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:33.464 10:15:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2741762 00:07:33.464 10:15:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2741762 /var/tmp/spdk2.sock 00:07:33.464 10:15:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:33.464 10:15:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:33.464 10:15:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2741762 /var/tmp/spdk2.sock 00:07:33.464 10:15:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:33.464 10:15:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:33.464 10:15:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:33.464 10:15:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:33.464 10:15:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2741762 /var/tmp/spdk2.sock 00:07:33.464 10:15:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 2741762 ']' 00:07:33.464 10:15:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:33.464 10:15:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:33.464 10:15:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:33.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:33.464 10:15:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:33.464 10:15:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:33.464 [2024-07-15 10:15:10.512028] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:33.464 [2024-07-15 10:15:10.512081] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2741762 ] 00:07:33.464 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.464 [2024-07-15 10:15:10.592366] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2741342 has claimed it. 00:07:33.464 [2024-07-15 10:15:10.592400] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:34.032 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2741762) - No such process 00:07:34.032 ERROR: process (pid: 2741762) is no longer running 00:07:34.032 10:15:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:34.033 10:15:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:34.033 10:15:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:34.033 10:15:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:34.033 10:15:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:34.033 10:15:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:34.033 10:15:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:34.033 10:15:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:34.033 10:15:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:34.033 10:15:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:34.033 10:15:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2741342 00:07:34.033 10:15:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 2741342 ']' 00:07:34.033 10:15:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 2741342 00:07:34.033 10:15:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:07:34.033 10:15:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:34.033 10:15:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2741342 00:07:34.033 10:15:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:34.033 10:15:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:34.033 10:15:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2741342' 00:07:34.033 killing process with pid 2741342 00:07:34.033 10:15:11 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@967 -- # kill 2741342 00:07:34.033 10:15:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 2741342 00:07:34.292 00:07:34.292 real 0m1.758s 00:07:34.292 user 0m4.923s 00:07:34.292 sys 0m0.379s 00:07:34.292 10:15:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:34.292 10:15:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:34.292 ************************************ 00:07:34.292 END TEST locking_overlapped_coremask 00:07:34.292 ************************************ 00:07:34.292 10:15:11 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:34.292 10:15:11 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:34.292 10:15:11 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:34.292 10:15:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.292 10:15:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:34.292 ************************************ 00:07:34.292 START TEST locking_overlapped_coremask_via_rpc 00:07:34.292 ************************************ 00:07:34.292 10:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:07:34.292 10:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2741916 00:07:34.292 10:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2741916 /var/tmp/spdk.sock 00:07:34.292 10:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:34.292 10:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2741916 ']' 00:07:34.292 10:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.292 10:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:34.292 10:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:34.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:34.292 10:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:34.292 10:15:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.551 [2024-07-15 10:15:11.518588] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:34.551 [2024-07-15 10:15:11.518646] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2741916 ] 00:07:34.551 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.551 [2024-07-15 10:15:11.587514] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
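The check_remaining_locks helper traced just above boils down to a small glob comparison against the per-core lock files. A minimal standalone sketch, assuming a target was started with -m 0x7 as in this test, so cores 0, 1 and 2 should each own a lock file:

  # Mirrors the locks/locks_expected check from event/cpu_locks.sh seen in the trace above.
  locks=(/var/tmp/spdk_cpu_lock_*)
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
  if [[ "${locks[*]}" == "${locks_expected[*]}" ]]; then
      echo "lock files match the 0x7 coremask"
  else
      echo "unexpected lock files: ${locks[*]}" >&2
  fi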
00:07:34.551 [2024-07-15 10:15:11.587545] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:34.551 [2024-07-15 10:15:11.660135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.551 [2024-07-15 10:15:11.660252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:34.551 [2024-07-15 10:15:11.660282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.120 10:15:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:35.120 10:15:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:35.120 10:15:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2742156 00:07:35.120 10:15:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2742156 /var/tmp/spdk2.sock 00:07:35.120 10:15:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2742156 ']' 00:07:35.120 10:15:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:35.120 10:15:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:35.120 10:15:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:35.120 10:15:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:35.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:35.120 10:15:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:35.120 10:15:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.380 [2024-07-15 10:15:12.346379] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:35.380 [2024-07-15 10:15:12.346433] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2742156 ] 00:07:35.380 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.380 [2024-07-15 10:15:12.428252] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:35.380 [2024-07-15 10:15:12.428274] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:35.380 [2024-07-15 10:15:12.533596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:35.380 [2024-07-15 10:15:12.537350] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:35.380 [2024-07-15 10:15:12.537352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:35.947 10:15:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:35.947 10:15:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:35.947 10:15:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:35.947 10:15:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.947 10:15:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.947 10:15:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.948 10:15:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:35.948 10:15:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:35.948 10:15:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:35.948 10:15:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:07:35.948 10:15:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:35.948 10:15:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:07:35.948 10:15:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:35.948 10:15:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:35.948 10:15:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.948 10:15:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.948 [2024-07-15 10:15:13.121289] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2741916 has claimed it. 
00:07:35.948 request: 00:07:35.948 { 00:07:35.948 "method": "framework_enable_cpumask_locks", 00:07:35.948 "req_id": 1 00:07:35.948 } 00:07:35.948 Got JSON-RPC error response 00:07:35.948 response: 00:07:35.948 { 00:07:35.948 "code": -32603, 00:07:35.948 "message": "Failed to claim CPU core: 2" 00:07:35.948 } 00:07:35.948 10:15:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:35.948 10:15:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:35.948 10:15:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:35.948 10:15:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:35.948 10:15:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:35.948 10:15:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2741916 /var/tmp/spdk.sock 00:07:35.948 10:15:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2741916 ']' 00:07:35.948 10:15:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.948 10:15:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:35.948 10:15:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.948 10:15:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:35.948 10:15:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.205 10:15:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:36.205 10:15:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:36.205 10:15:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2742156 /var/tmp/spdk2.sock 00:07:36.205 10:15:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 2742156 ']' 00:07:36.205 10:15:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:36.205 10:15:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:36.205 10:15:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:36.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
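The -32603 response above is the second target being refused locks that the first target already claimed over RPC. A rough way to reproduce it by hand, under the assumption that scripts/rpc.py in this tree exposes the framework_enable_cpumask_locks method shown in the JSON request (paths and flags as used elsewhere in this run):

  SPDK_TGT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt
  RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  $SPDK_TGT -m 0x7 --disable-cpumask-locks &                            # cores 0-2, no locks taken yet
  $SPDK_TGT -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &    # cores 2-4, overlaps on core 2
  sleep 2                                                               # crude stand-in for waitforlisten
  $RPC framework_enable_cpumask_locks                                   # first target claims cores 0-2
  $RPC -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
  # The second call fails with code -32603, "Failed to claim CPU core: 2",
  # because /var/tmp/spdk_cpu_lock_002 is already held by the first target.
  # Kill both targets afterwards, as the test's killprocess cleanup does.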
00:07:36.205 10:15:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:36.205 10:15:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.464 10:15:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:36.464 10:15:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:36.464 10:15:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:36.464 10:15:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:36.464 10:15:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:36.464 10:15:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:36.464 00:07:36.464 real 0m2.008s 00:07:36.464 user 0m0.772s 00:07:36.464 sys 0m0.156s 00:07:36.464 10:15:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:36.464 10:15:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.464 ************************************ 00:07:36.464 END TEST locking_overlapped_coremask_via_rpc 00:07:36.464 ************************************ 00:07:36.464 10:15:13 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:36.464 10:15:13 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:36.464 10:15:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2741916 ]] 00:07:36.464 10:15:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2741916 00:07:36.464 10:15:13 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2741916 ']' 00:07:36.464 10:15:13 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2741916 00:07:36.464 10:15:13 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:07:36.464 10:15:13 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:36.464 10:15:13 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2741916 00:07:36.464 10:15:13 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:36.464 10:15:13 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:36.464 10:15:13 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2741916' 00:07:36.464 killing process with pid 2741916 00:07:36.464 10:15:13 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 2741916 00:07:36.464 10:15:13 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 2741916 00:07:36.724 10:15:13 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2742156 ]] 00:07:36.724 10:15:13 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2742156 00:07:36.724 10:15:13 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2742156 ']' 00:07:36.724 10:15:13 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2742156 00:07:36.724 10:15:13 event.cpu_locks -- common/autotest_common.sh@953 -- # 
uname 00:07:36.724 10:15:13 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:36.724 10:15:13 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2742156 00:07:36.724 10:15:13 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:07:36.724 10:15:13 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:07:36.724 10:15:13 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2742156' 00:07:36.724 killing process with pid 2742156 00:07:36.724 10:15:13 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 2742156 00:07:36.724 10:15:13 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 2742156 00:07:36.983 10:15:14 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:36.983 10:15:14 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:36.983 10:15:14 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2741916 ]] 00:07:36.984 10:15:14 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2741916 00:07:36.984 10:15:14 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2741916 ']' 00:07:36.984 10:15:14 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2741916 00:07:36.984 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2741916) - No such process 00:07:36.984 10:15:14 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 2741916 is not found' 00:07:36.984 Process with pid 2741916 is not found 00:07:36.984 10:15:14 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2742156 ]] 00:07:36.984 10:15:14 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2742156 00:07:36.984 10:15:14 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 2742156 ']' 00:07:36.984 10:15:14 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 2742156 00:07:36.984 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2742156) - No such process 00:07:36.984 10:15:14 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 2742156 is not found' 00:07:36.984 Process with pid 2742156 is not found 00:07:36.984 10:15:14 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:36.984 00:07:36.984 real 0m15.562s 00:07:36.984 user 0m26.862s 00:07:36.984 sys 0m4.622s 00:07:36.984 10:15:14 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:36.984 10:15:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:36.984 ************************************ 00:07:36.984 END TEST cpu_locks 00:07:36.984 ************************************ 00:07:36.984 10:15:14 event -- common/autotest_common.sh@1142 -- # return 0 00:07:36.984 00:07:36.984 real 0m41.075s 00:07:36.984 user 1m20.759s 00:07:36.984 sys 0m7.668s 00:07:36.984 10:15:14 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:36.984 10:15:14 event -- common/autotest_common.sh@10 -- # set +x 00:07:36.984 ************************************ 00:07:36.984 END TEST event 00:07:36.984 ************************************ 00:07:36.984 10:15:14 -- common/autotest_common.sh@1142 -- # return 0 00:07:36.984 10:15:14 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:07:36.984 10:15:14 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:36.984 10:15:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.984 10:15:14 -- 
common/autotest_common.sh@10 -- # set +x 00:07:36.984 ************************************ 00:07:36.984 START TEST thread 00:07:36.984 ************************************ 00:07:36.984 10:15:14 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:07:37.244 * Looking for test storage... 00:07:37.244 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread 00:07:37.244 10:15:14 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:37.244 10:15:14 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:37.244 10:15:14 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.244 10:15:14 thread -- common/autotest_common.sh@10 -- # set +x 00:07:37.244 ************************************ 00:07:37.244 START TEST thread_poller_perf 00:07:37.244 ************************************ 00:07:37.244 10:15:14 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:37.244 [2024-07-15 10:15:14.301453] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:37.244 [2024-07-15 10:15:14.301565] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2742593 ] 00:07:37.244 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.244 [2024-07-15 10:15:14.377607] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.504 [2024-07-15 10:15:14.450900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.504 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:38.445 ====================================== 00:07:38.445 busy:2414742248 (cyc) 00:07:38.445 total_run_count: 288000 00:07:38.445 tsc_hz: 2400000000 (cyc) 00:07:38.445 ====================================== 00:07:38.445 poller_cost: 8384 (cyc), 3493 (nsec) 00:07:38.445 00:07:38.445 real 0m1.237s 00:07:38.445 user 0m1.141s 00:07:38.445 sys 0m0.090s 00:07:38.445 10:15:15 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:38.445 10:15:15 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:38.445 ************************************ 00:07:38.445 END TEST thread_poller_perf 00:07:38.445 ************************************ 00:07:38.445 10:15:15 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:38.445 10:15:15 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:38.445 10:15:15 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:38.445 10:15:15 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.445 10:15:15 thread -- common/autotest_common.sh@10 -- # set +x 00:07:38.445 ************************************ 00:07:38.445 START TEST thread_poller_perf 00:07:38.445 ************************************ 00:07:38.445 10:15:15 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:38.445 [2024-07-15 10:15:15.613536] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:38.445 [2024-07-15 10:15:15.613631] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2742941 ] 00:07:38.706 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.706 [2024-07-15 10:15:15.683372] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.706 [2024-07-15 10:15:15.745382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.706 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:07:39.649 ====================================== 00:07:39.649 busy:2401948068 (cyc) 00:07:39.649 total_run_count: 3807000 00:07:39.649 tsc_hz: 2400000000 (cyc) 00:07:39.649 ====================================== 00:07:39.649 poller_cost: 630 (cyc), 262 (nsec) 00:07:39.649 00:07:39.649 real 0m1.209s 00:07:39.649 user 0m1.130s 00:07:39.649 sys 0m0.075s 00:07:39.649 10:15:16 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:39.649 10:15:16 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:39.649 ************************************ 00:07:39.649 END TEST thread_poller_perf 00:07:39.649 ************************************ 00:07:39.649 10:15:16 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:39.649 10:15:16 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:39.649 00:07:39.649 real 0m2.700s 00:07:39.649 user 0m2.374s 00:07:39.649 sys 0m0.333s 00:07:39.649 10:15:16 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:39.649 10:15:16 thread -- common/autotest_common.sh@10 -- # set +x 00:07:39.649 ************************************ 00:07:39.649 END TEST thread 00:07:39.649 ************************************ 00:07:39.910 10:15:16 -- common/autotest_common.sh@1142 -- # return 0 00:07:39.910 10:15:16 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel.sh 00:07:39.910 10:15:16 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:39.910 10:15:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.910 10:15:16 -- common/autotest_common.sh@10 -- # set +x 00:07:39.910 ************************************ 00:07:39.910 START TEST accel 00:07:39.910 ************************************ 00:07:39.910 10:15:16 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel.sh 00:07:39.910 * Looking for test storage... 00:07:39.910 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel 00:07:39.910 10:15:17 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:39.910 10:15:17 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:07:39.910 10:15:17 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:39.910 10:15:17 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=2743328 00:07:39.910 10:15:17 accel -- accel/accel.sh@63 -- # waitforlisten 2743328 00:07:39.910 10:15:17 accel -- common/autotest_common.sh@829 -- # '[' -z 2743328 ']' 00:07:39.910 10:15:17 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.910 10:15:17 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:39.910 10:15:17 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
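The poller_cost figures printed by poller_perf are consistent with busy cycles divided by total_run_count, with the nanosecond value derived from the reported tsc_hz. A quick check against the 1-microsecond-period run above, using the banner's numbers verbatim:

  # Recompute "poller_cost: 8384 (cyc), 3493 (nsec)" from the raw counters.
  busy=2414742248; runs=288000; tsc_hz=2400000000
  awk -v b="$busy" -v r="$runs" -v hz="$tsc_hz" \
      'BEGIN { cyc = b / r; printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, cyc * 1e9 / hz }'
  # The 0-microsecond run follows the same arithmetic: 2401948068 / 3807000 ~ 630 cyc ~ 262 nsec.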
00:07:39.910 10:15:17 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:39.910 10:15:17 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:39.910 10:15:17 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:39.910 10:15:17 accel -- common/autotest_common.sh@10 -- # set +x 00:07:39.910 10:15:17 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:39.910 10:15:17 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:39.910 10:15:17 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:39.910 10:15:17 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:39.910 10:15:17 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:39.910 10:15:17 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:39.910 10:15:17 accel -- accel/accel.sh@41 -- # jq -r . 00:07:39.910 [2024-07-15 10:15:17.066960] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:39.910 [2024-07-15 10:15:17.067027] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2743328 ] 00:07:39.910 EAL: No free 2048 kB hugepages reported on node 1 00:07:40.171 [2024-07-15 10:15:17.140523] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.171 [2024-07-15 10:15:17.214508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.740 10:15:17 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:40.740 10:15:17 accel -- common/autotest_common.sh@862 -- # return 0 00:07:40.740 10:15:17 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:40.740 10:15:17 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:40.740 10:15:17 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:40.740 10:15:17 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:40.740 10:15:17 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:40.740 10:15:17 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:40.740 10:15:17 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:40.740 10:15:17 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:40.740 10:15:17 accel -- common/autotest_common.sh@10 -- # set +x 00:07:40.740 10:15:17 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:40.740 10:15:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:40.740 10:15:17 accel -- accel/accel.sh@72 -- # IFS== 00:07:40.740 10:15:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:40.740 10:15:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:40.740 10:15:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:40.740 10:15:17 accel -- accel/accel.sh@72 -- # IFS== 00:07:40.740 10:15:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:40.740 10:15:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:40.740 10:15:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:40.740 10:15:17 accel -- accel/accel.sh@72 -- # IFS== 00:07:40.740 10:15:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:40.740 10:15:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:40.740 10:15:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:40.740 10:15:17 accel -- accel/accel.sh@72 -- # IFS== 00:07:40.740 10:15:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:40.740 10:15:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:40.740 10:15:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:40.740 10:15:17 accel -- accel/accel.sh@72 -- # IFS== 00:07:40.740 10:15:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:40.740 10:15:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:40.740 10:15:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:40.740 10:15:17 accel -- accel/accel.sh@72 -- # IFS== 00:07:40.740 10:15:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:40.740 10:15:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:40.740 10:15:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:40.740 10:15:17 accel -- accel/accel.sh@72 -- # IFS== 00:07:40.740 10:15:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:40.740 10:15:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:40.740 10:15:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:40.740 10:15:17 accel -- accel/accel.sh@72 -- # IFS== 00:07:40.740 10:15:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:40.740 10:15:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:40.740 10:15:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:40.740 10:15:17 accel -- accel/accel.sh@72 -- # IFS== 00:07:40.740 10:15:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:40.740 10:15:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:40.740 10:15:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:40.740 10:15:17 accel -- accel/accel.sh@72 -- # IFS== 00:07:40.740 10:15:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:40.740 10:15:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:40.740 10:15:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:40.740 10:15:17 accel -- accel/accel.sh@72 -- # IFS== 00:07:40.740 10:15:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:40.740 
10:15:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:40.740 10:15:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:40.740 10:15:17 accel -- accel/accel.sh@72 -- # IFS== 00:07:40.740 10:15:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:40.740 10:15:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:40.740 10:15:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:40.740 10:15:17 accel -- accel/accel.sh@72 -- # IFS== 00:07:40.740 10:15:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:40.740 10:15:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:40.740 10:15:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:40.740 10:15:17 accel -- accel/accel.sh@72 -- # IFS== 00:07:40.740 10:15:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:40.740 10:15:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:40.740 10:15:17 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:40.740 10:15:17 accel -- accel/accel.sh@72 -- # IFS== 00:07:40.740 10:15:17 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:40.740 10:15:17 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:40.740 10:15:17 accel -- accel/accel.sh@75 -- # killprocess 2743328 00:07:40.740 10:15:17 accel -- common/autotest_common.sh@948 -- # '[' -z 2743328 ']' 00:07:40.740 10:15:17 accel -- common/autotest_common.sh@952 -- # kill -0 2743328 00:07:40.740 10:15:17 accel -- common/autotest_common.sh@953 -- # uname 00:07:40.740 10:15:17 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:40.740 10:15:17 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2743328 00:07:41.014 10:15:17 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:41.014 10:15:17 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:41.014 10:15:17 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2743328' 00:07:41.014 killing process with pid 2743328 00:07:41.014 10:15:17 accel -- common/autotest_common.sh@967 -- # kill 2743328 00:07:41.014 10:15:17 accel -- common/autotest_common.sh@972 -- # wait 2743328 00:07:41.014 10:15:18 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:41.014 10:15:18 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:41.014 10:15:18 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:41.014 10:15:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.014 10:15:18 accel -- common/autotest_common.sh@10 -- # set +x 00:07:41.014 10:15:18 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:07:41.014 10:15:18 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:41.014 10:15:18 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:07:41.014 10:15:18 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:41.014 10:15:18 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:41.014 10:15:18 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:41.014 10:15:18 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:41.274 10:15:18 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:41.274 10:15:18 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:07:41.274 10:15:18 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
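The long expected_opcs loop above is driven by a single RPC. Outside the harness the same query can be issued directly; rpc_cmd here is essentially autotest_common.sh's wrapper around scripts/rpc.py, talking to /var/tmp/spdk.sock by default:

  # List every accel opcode and the module assigned to it, as accel.sh does above.
  # With no accel modules configured, each opcode reports the software module.
  rpc_cmd accel_get_opc_assignments \
      | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
  # illustrative output: copy=software, fill=software, crc32c=software, ...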
00:07:41.274 10:15:18 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:41.274 10:15:18 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:07:41.274 10:15:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:41.274 10:15:18 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:41.274 10:15:18 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:41.274 10:15:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.274 10:15:18 accel -- common/autotest_common.sh@10 -- # set +x 00:07:41.274 ************************************ 00:07:41.274 START TEST accel_missing_filename 00:07:41.274 ************************************ 00:07:41.274 10:15:18 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:07:41.274 10:15:18 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:07:41.274 10:15:18 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:41.274 10:15:18 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:41.274 10:15:18 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:41.274 10:15:18 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:41.274 10:15:18 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:41.274 10:15:18 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:07:41.274 10:15:18 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:41.274 10:15:18 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:07:41.274 10:15:18 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:41.274 10:15:18 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:41.274 10:15:18 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:41.274 10:15:18 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:41.274 10:15:18 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:41.274 10:15:18 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:07:41.274 10:15:18 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:07:41.274 [2024-07-15 10:15:18.349263] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:41.274 [2024-07-15 10:15:18.349370] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2743512 ] 00:07:41.274 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.274 [2024-07-15 10:15:18.428662] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.535 [2024-07-15 10:15:18.503749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.535 [2024-07-15 10:15:18.536296] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:41.535 [2024-07-15 10:15:18.573486] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:41.535 A filename is required. 
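accel_missing_filename above only exercises the error path: a compress workload with no input file must abort. Stripped of the NOT/valid_exec_arg plumbing, the failing call and its fixed counterpart look like this, with the binary and data paths as used elsewhere in this run:

  PERF=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf
  BIB=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib
  $PERF -t 1 -w compress               # aborts: "A filename is required."
  $PERF -t 1 -w compress -l "$BIB"     # -l names the uncompressed input, so this form runs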
00:07:41.535 10:15:18 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:07:41.535 10:15:18 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:41.535 10:15:18 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:07:41.535 10:15:18 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:07:41.535 10:15:18 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:07:41.535 10:15:18 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:41.535 00:07:41.535 real 0m0.314s 00:07:41.535 user 0m0.235s 00:07:41.535 sys 0m0.121s 00:07:41.535 10:15:18 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:41.535 10:15:18 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:07:41.535 ************************************ 00:07:41.535 END TEST accel_missing_filename 00:07:41.535 ************************************ 00:07:41.535 10:15:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:41.535 10:15:18 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:41.535 10:15:18 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:41.535 10:15:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.535 10:15:18 accel -- common/autotest_common.sh@10 -- # set +x 00:07:41.535 ************************************ 00:07:41.535 START TEST accel_compress_verify 00:07:41.535 ************************************ 00:07:41.535 10:15:18 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:41.535 10:15:18 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:07:41.535 10:15:18 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:41.535 10:15:18 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:41.535 10:15:18 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:41.535 10:15:18 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:41.535 10:15:18 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:41.535 10:15:18 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:41.535 10:15:18 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:41.535 10:15:18 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:41.535 10:15:18 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:41.535 10:15:18 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:41.535 10:15:18 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:41.535 10:15:18 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:41.535 10:15:18 accel.accel_compress_verify -- 
accel/accel.sh@36 -- # [[ -n '' ]] 00:07:41.535 10:15:18 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:41.535 10:15:18 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:07:41.796 [2024-07-15 10:15:18.732576] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:41.796 [2024-07-15 10:15:18.732641] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2743730 ] 00:07:41.796 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.796 [2024-07-15 10:15:18.801867] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.796 [2024-07-15 10:15:18.867385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.796 [2024-07-15 10:15:18.899280] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:41.796 [2024-07-15 10:15:18.936421] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:41.796 00:07:41.796 Compression does not support the verify option, aborting. 00:07:41.796 10:15:18 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:07:41.796 10:15:18 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:41.796 10:15:18 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:07:41.796 10:15:18 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:07:41.796 10:15:18 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:07:41.796 10:15:18 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:41.796 00:07:41.796 real 0m0.287s 00:07:41.796 user 0m0.219s 00:07:41.796 sys 0m0.110s 00:07:41.796 10:15:18 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:41.796 10:15:18 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:07:41.796 ************************************ 00:07:41.796 END TEST accel_compress_verify 00:07:41.796 ************************************ 00:07:42.058 10:15:19 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:42.058 10:15:19 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:42.058 10:15:19 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:42.058 10:15:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.058 10:15:19 accel -- common/autotest_common.sh@10 -- # set +x 00:07:42.058 ************************************ 00:07:42.058 START TEST accel_wrong_workload 00:07:42.058 ************************************ 00:07:42.058 10:15:19 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:07:42.058 10:15:19 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:07:42.058 10:15:19 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:42.058 10:15:19 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:42.058 10:15:19 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:42.058 10:15:19 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:42.058 10:15:19 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case 
"$(type -t "$arg")" in 00:07:42.058 10:15:19 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:07:42.058 10:15:19 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:42.058 10:15:19 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:07:42.058 10:15:19 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:42.058 10:15:19 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:42.058 10:15:19 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:42.058 10:15:19 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:42.058 10:15:19 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:42.058 10:15:19 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:07:42.058 10:15:19 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:07:42.058 Unsupported workload type: foobar 00:07:42.058 [2024-07-15 10:15:19.096566] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:42.058 accel_perf options: 00:07:42.058 [-h help message] 00:07:42.058 [-q queue depth per core] 00:07:42.058 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:42.058 [-T number of threads per core 00:07:42.058 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:42.058 [-t time in seconds] 00:07:42.058 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:42.058 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:42.058 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:42.058 [-l for compress/decompress workloads, name of uncompressed input file 00:07:42.058 [-S for crc32c workload, use this seed value (default 0) 00:07:42.058 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:42.058 [-f for fill workload, use this BYTE value (default 255) 00:07:42.058 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:42.058 [-y verify result if this switch is on] 00:07:42.058 [-a tasks to allocate per core (default: same value as -q)] 00:07:42.058 Can be used to spread operations across a wider range of memory. 
00:07:42.058 10:15:19 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:07:42.058 10:15:19 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:42.058 10:15:19 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:42.058 10:15:19 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:42.058 00:07:42.058 real 0m0.038s 00:07:42.058 user 0m0.025s 00:07:42.058 sys 0m0.013s 00:07:42.058 10:15:19 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:42.058 10:15:19 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:07:42.058 ************************************ 00:07:42.058 END TEST accel_wrong_workload 00:07:42.058 ************************************ 00:07:42.058 Error: writing output failed: Broken pipe 00:07:42.058 10:15:19 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:42.058 10:15:19 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:42.058 10:15:19 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:42.058 10:15:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.058 10:15:19 accel -- common/autotest_common.sh@10 -- # set +x 00:07:42.058 ************************************ 00:07:42.058 START TEST accel_negative_buffers 00:07:42.058 ************************************ 00:07:42.058 10:15:19 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:42.058 10:15:19 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:07:42.058 10:15:19 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:42.058 10:15:19 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:42.058 10:15:19 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:42.058 10:15:19 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:42.058 10:15:19 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:42.058 10:15:19 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:07:42.058 10:15:19 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:42.058 10:15:19 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:42.059 10:15:19 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:42.059 10:15:19 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:42.059 10:15:19 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:42.059 10:15:19 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:42.059 10:15:19 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:42.059 10:15:19 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:42.059 10:15:19 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:07:42.059 -x option must be non-negative. 
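Both negative tests above poke accel_perf's argument parser directly: an unknown -w value and a negative -x each make spdk_app_parse_args fail before any work is submitted. The equivalent direct invocations, with a valid xor run for contrast (the usage text above gives -x a minimum of 2):

  PERF=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf
  $PERF -t 1 -w foobar            # rejected: "Unsupported workload type: foobar"
  $PERF -t 1 -w xor -y -x -1      # rejected: "-x option must be non-negative."
  $PERF -t 1 -w xor -y -x 2       # two source buffers meets the minimum, so this form is accepted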
00:07:42.059 [2024-07-15 10:15:19.207640] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:42.059 accel_perf options: 00:07:42.059 [-h help message] 00:07:42.059 [-q queue depth per core] 00:07:42.059 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:42.059 [-T number of threads per core 00:07:42.059 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:42.059 [-t time in seconds] 00:07:42.059 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:42.059 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:42.059 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:42.059 [-l for compress/decompress workloads, name of uncompressed input file 00:07:42.059 [-S for crc32c workload, use this seed value (default 0) 00:07:42.059 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:42.059 [-f for fill workload, use this BYTE value (default 255) 00:07:42.059 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:42.059 [-y verify result if this switch is on] 00:07:42.059 [-a tasks to allocate per core (default: same value as -q)] 00:07:42.059 Can be used to spread operations across a wider range of memory. 00:07:42.059 10:15:19 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:07:42.059 10:15:19 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:42.059 10:15:19 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:42.059 10:15:19 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:42.059 00:07:42.059 real 0m0.036s 00:07:42.059 user 0m0.023s 00:07:42.059 sys 0m0.013s 00:07:42.059 10:15:19 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:42.059 10:15:19 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:07:42.059 ************************************ 00:07:42.059 END TEST accel_negative_buffers 00:07:42.059 ************************************ 00:07:42.059 Error: writing output failed: Broken pipe 00:07:42.059 10:15:19 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:42.059 10:15:19 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:42.059 10:15:19 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:42.059 10:15:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.059 10:15:19 accel -- common/autotest_common.sh@10 -- # set +x 00:07:42.321 ************************************ 00:07:42.321 START TEST accel_crc32c 00:07:42.321 ************************************ 00:07:42.321 10:15:19 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:42.321 [2024-07-15 10:15:19.315168] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:42.321 [2024-07-15 10:15:19.315247] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2743796 ] 00:07:42.321 EAL: No free 2048 kB hugepages reported on node 1 00:07:42.321 [2024-07-15 10:15:19.383273] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.321 [2024-07-15 10:15:19.448742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:42.321 10:15:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:43.708 10:15:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:43.708 10:15:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:07:43.708 10:15:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:43.708 10:15:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:43.708 10:15:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:43.708 10:15:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:43.708 10:15:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:43.708 10:15:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:43.708 10:15:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:43.708 10:15:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:43.708 10:15:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:43.708 10:15:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:43.708 10:15:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:43.708 10:15:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:43.708 10:15:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:43.708 10:15:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:43.708 10:15:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:43.708 10:15:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:43.708 10:15:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:43.708 10:15:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:43.708 10:15:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:43.708 10:15:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:43.708 10:15:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:43.708 10:15:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:43.708 10:15:20 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:43.708 10:15:20 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:43.708 10:15:20 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:43.708 00:07:43.708 real 0m1.292s 00:07:43.708 user 0m1.196s 00:07:43.708 sys 0m0.107s 00:07:43.708 10:15:20 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:43.708 10:15:20 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:43.708 ************************************ 00:07:43.708 END TEST accel_crc32c 00:07:43.708 ************************************ 00:07:43.709 10:15:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:43.709 10:15:20 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:43.709 10:15:20 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:43.709 10:15:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.709 10:15:20 accel -- common/autotest_common.sh@10 -- # set +x 00:07:43.709 ************************************ 00:07:43.709 START TEST accel_crc32c_C2 00:07:43.709 ************************************ 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:43.709 10:15:20 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:43.709 [2024-07-15 10:15:20.679507] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:43.709 [2024-07-15 10:15:20.679572] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2744147 ] 00:07:43.709 EAL: No free 2048 kB hugepages reported on node 1 00:07:43.709 [2024-07-15 10:15:20.748561] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.709 [2024-07-15 10:15:20.819126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:43.709 10:15:20 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:07:43.709 10:15:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:45.095 10:15:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:45.095 10:15:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.095 10:15:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:45.095 10:15:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:45.095 10:15:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:45.095 10:15:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.095 10:15:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:45.095 10:15:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:45.095 10:15:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:45.095 10:15:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.095 10:15:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:45.095 10:15:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:45.095 10:15:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:45.095 10:15:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.095 10:15:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:45.095 10:15:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:45.095 10:15:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:45.095 10:15:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.095 10:15:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:45.095 10:15:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:45.095 10:15:21 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:45.095 10:15:21 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.095 10:15:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:45.095 10:15:21 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:45.095 10:15:21 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:45.095 10:15:21 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:45.095 10:15:21 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:45.095 00:07:45.095 real 0m1.298s 00:07:45.095 user 0m1.198s 00:07:45.095 sys 0m0.111s 00:07:45.095 10:15:21 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:45.095 10:15:21 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:45.095 ************************************ 00:07:45.095 END TEST accel_crc32c_C2 00:07:45.095 ************************************ 00:07:45.095 10:15:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:45.095 10:15:21 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:45.095 10:15:21 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:45.095 10:15:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.095 10:15:21 accel -- common/autotest_common.sh@10 -- # set +x 00:07:45.095 ************************************ 00:07:45.095 START TEST accel_copy 00:07:45.095 ************************************ 00:07:45.095 10:15:22 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:07:45.095 10:15:22 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:45.095 10:15:22 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
00:07:45.095 10:15:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.095 10:15:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.095 10:15:22 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:07:45.096 [2024-07-15 10:15:22.052551] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:45.096 [2024-07-15 10:15:22.052618] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2744471 ] 00:07:45.096 EAL: No free 2048 kB hugepages reported on node 1 00:07:45.096 [2024-07-15 10:15:22.122918] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.096 [2024-07-15 10:15:22.193733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 
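Every accel_perf invocation in this section takes -c /dev/fd/62, and the surrounding trace shows the wrapper preparing an empty accel_json_cfg array, setting a local IFS=, and running jq -r . before the launch; the actual plumbing lives in accel.sh and is not reproduced in the trace. The sketch below only illustrates the generic bash technique of handing a tool generated JSON through a /dev/fd path via process substitution; the function name, JSON shape, and tool name are illustrative assumptions, not the script's real code:

    # generic pattern: give a command a config "file" that is really a pipe
    gen_cfg() {
        local IFS=,                                   # join any collected JSON fragments with commas
        printf '{"subsystems":[%s]}' "${accel_json_cfg[*]}" | jq -r .
    }
    accel_json_cfg=()                                 # stays empty when no module/driver is forced
    some_accel_tool -c <(gen_cfg)                     # the tool sees a path like /dev/fd/63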
00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:45.096 10:15:22 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.482 10:15:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:46.482 10:15:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.482 10:15:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.482 10:15:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.482 10:15:23 
accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:46.482 10:15:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.482 10:15:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.482 10:15:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.482 10:15:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:46.482 10:15:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.482 10:15:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.482 10:15:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.482 10:15:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:46.483 10:15:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.483 10:15:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.483 10:15:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.483 10:15:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:46.483 10:15:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.483 10:15:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.483 10:15:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.483 10:15:23 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:46.483 10:15:23 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:46.483 10:15:23 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:46.483 10:15:23 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:46.483 10:15:23 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:46.483 10:15:23 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:46.483 10:15:23 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:46.483 00:07:46.483 real 0m1.299s 00:07:46.483 user 0m1.201s 00:07:46.483 sys 0m0.109s 00:07:46.483 10:15:23 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:46.483 10:15:23 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:46.483 ************************************ 00:07:46.483 END TEST accel_copy 00:07:46.483 ************************************ 00:07:46.483 10:15:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:46.483 10:15:23 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:46.483 10:15:23 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:46.483 10:15:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.483 10:15:23 accel -- common/autotest_common.sh@10 -- # set +x 00:07:46.483 ************************************ 00:07:46.483 START TEST accel_fill 00:07:46.483 ************************************ 00:07:46.483 10:15:23 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 
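The fill test adds three tuning flags on top of the common set: -f (the byte written, 128 here), -q (queue depth per core, 64) and -a (tasks allocated per core, 64), all described in the usage text near the top of this section. A hand-run sketch under the same assumption as the crc32c example above (no -c config, so the default software path):

    # fill 4 KiB buffers with byte 128 (0x80) for 1 second, 64 outstanding ops and 64 tasks per core
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf \
        -t 1 -w fill -f 128 -q 64 -a 64 -y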
00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:07:46.483 [2024-07-15 10:15:23.427323] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:46.483 [2024-07-15 10:15:23.427418] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2744663 ] 00:07:46.483 EAL: No free 2048 kB hugepages reported on node 1 00:07:46.483 [2024-07-15 10:15:23.497361] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.483 [2024-07-15 10:15:23.567795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:46.483 10:15:23 
accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:46.483 10:15:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:47.869 10:15:24 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:47.869 10:15:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:47.869 10:15:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:47.869 10:15:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:47.869 10:15:24 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:47.869 10:15:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:47.869 10:15:24 accel.accel_fill -- accel/accel.sh@19 
-- # IFS=: 00:07:47.869 10:15:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:47.869 10:15:24 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:47.869 10:15:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:47.869 10:15:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:47.869 10:15:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:47.869 10:15:24 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:47.869 10:15:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:47.869 10:15:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:47.869 10:15:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:47.869 10:15:24 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:47.870 10:15:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:47.870 10:15:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:47.870 10:15:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:47.870 10:15:24 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:47.870 10:15:24 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:47.870 10:15:24 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:47.870 10:15:24 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:47.870 10:15:24 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:47.870 10:15:24 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:47.870 10:15:24 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:47.870 00:07:47.870 real 0m1.300s 00:07:47.870 user 0m1.201s 00:07:47.870 sys 0m0.110s 00:07:47.870 10:15:24 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:47.870 10:15:24 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:47.870 ************************************ 00:07:47.870 END TEST accel_fill 00:07:47.870 ************************************ 00:07:47.870 10:15:24 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:47.870 10:15:24 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:47.870 10:15:24 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:47.870 10:15:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.870 10:15:24 accel -- common/autotest_common.sh@10 -- # set +x 00:07:47.870 ************************************ 00:07:47.870 START TEST accel_copy_crc32c 00:07:47.870 ************************************ 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 
00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:47.870 [2024-07-15 10:15:24.799772] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:47.870 [2024-07-15 10:15:24.799863] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2744898 ] 00:07:47.870 EAL: No free 2048 kB hugepages reported on node 1 00:07:47.870 [2024-07-15 10:15:24.870212] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.870 [2024-07-15 10:15:24.941840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.870 10:15:24 
accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- 
accel/accel.sh@21 -- # case "$var" in 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.870 10:15:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:49.257 10:15:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:49.257 10:15:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:49.257 10:15:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:49.257 10:15:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:49.257 10:15:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:49.257 10:15:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:49.257 10:15:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:49.257 10:15:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:49.257 10:15:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:49.257 10:15:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:49.257 10:15:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:49.257 10:15:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:49.257 10:15:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:49.257 10:15:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:49.257 10:15:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:49.257 10:15:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:49.257 10:15:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:49.257 10:15:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:49.257 10:15:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:49.257 10:15:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:49.257 10:15:26 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:49.257 10:15:26 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:49.257 10:15:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:49.257 10:15:26 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:49.257 10:15:26 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:49.257 10:15:26 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:49.257 10:15:26 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:49.257 00:07:49.257 real 0m1.301s 00:07:49.257 user 0m1.199s 00:07:49.257 sys 0m0.112s 00:07:49.257 10:15:26 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:49.257 10:15:26 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:49.257 ************************************ 00:07:49.257 END TEST accel_copy_crc32c 00:07:49.257 ************************************ 00:07:49.257 10:15:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:49.257 10:15:26 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:49.257 10:15:26 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:49.257 10:15:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.257 10:15:26 accel -- common/autotest_common.sh@10 -- # set +x 00:07:49.257 ************************************ 00:07:49.257 START TEST accel_copy_crc32c_C2 00:07:49.257 ************************************ 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- 
common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:49.257 [2024-07-15 10:15:26.174980] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:49.257 [2024-07-15 10:15:26.175074] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2745248 ] 00:07:49.257 EAL: No free 2048 kB hugepages reported on node 1 00:07:49.257 [2024-07-15 10:15:26.242993] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.257 [2024-07-15 10:15:26.309419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.257 10:15:26 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 
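The _C2 variants differ from the plain tests only by -C 2, which the usage text defines as the io vector size for supported workloads; the trace above records the resulting '4096 bytes' and '8192 bytes' buffer values for this copy_crc32c case. An equivalent direct run, under the same assumptions as the earlier sketches:

    # copy_crc32c with a 2-element io vector, verifying results over a 1-second run
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf \
        -t 1 -w copy_crc32c -y -C 2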
00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:49.257 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.258 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:49.258 10:15:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:50.652 10:15:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:50.652 10:15:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.652 10:15:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:50.652 10:15:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:50.652 10:15:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:50.652 10:15:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.652 10:15:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:50.652 10:15:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:50.652 10:15:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:50.652 10:15:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.652 10:15:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:50.652 10:15:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:50.652 10:15:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:50.652 10:15:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.652 10:15:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:50.652 10:15:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:50.652 10:15:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:50.652 10:15:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.652 10:15:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:50.652 10:15:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:50.652 10:15:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:50.652 10:15:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:50.652 10:15:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:50.652 10:15:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:50.652 10:15:27 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:50.652 10:15:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:50.652 10:15:27 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:50.652 00:07:50.652 real 0m1.293s 00:07:50.652 user 0m1.194s 00:07:50.652 sys 0m0.111s 00:07:50.652 10:15:27 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:50.652 10:15:27 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:50.652 ************************************ 00:07:50.652 END TEST accel_copy_crc32c_C2 00:07:50.652 ************************************ 00:07:50.652 10:15:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:50.652 10:15:27 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:50.652 10:15:27 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:50.652 10:15:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:50.653 10:15:27 accel -- common/autotest_common.sh@10 -- # set +x 00:07:50.653 ************************************ 00:07:50.653 START TEST accel_dualcast 00:07:50.653 ************************************ 00:07:50.653 10:15:27 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:50.653 [2024-07-15 10:15:27.542206] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
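The dualcast case started above drives accel_perf with exactly the flags recorded in the trace: -t 1 (a '1 seconds' duration is parsed later), -w dualcast, and -y to verify the result, with a JSON accel config fed in over /dev/fd/62 (the trace shows build_accel_config assembling accel_json_cfg and piping it through jq). A hand-run equivalent, assuming a config file on disk can stand in for that descriptor:

  # Reproducing the traced dualcast run; accel.json as a stand-in for the
  # fd-based config built by build_accel_config is an assumption.
  ACCEL_PERF=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf
  $ACCEL_PERF -c accel.json -t 1 -w dualcast -y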
00:07:50.653 [2024-07-15 10:15:27.542274] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2745603 ] 00:07:50.653 EAL: No free 2048 kB hugepages reported on node 1 00:07:50.653 [2024-07-15 10:15:27.609547] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.653 [2024-07-15 10:15:27.676080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:50.653 10:15:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:52.041 10:15:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:52.041 10:15:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:52.041 10:15:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:52.041 10:15:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:52.041 10:15:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:52.041 10:15:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:52.041 10:15:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:52.041 10:15:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:52.041 10:15:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:52.041 10:15:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:52.041 10:15:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:52.041 10:15:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:52.041 10:15:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:52.041 10:15:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:52.041 10:15:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:52.041 10:15:28 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:52.041 10:15:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:52.041 10:15:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:52.041 10:15:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:52.041 10:15:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:52.041 10:15:28 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:52.041 10:15:28 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:52.041 10:15:28 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:52.041 10:15:28 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:52.041 10:15:28 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:52.041 10:15:28 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:52.041 10:15:28 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:52.041 00:07:52.041 real 0m1.292s 00:07:52.041 user 0m1.188s 00:07:52.041 sys 0m0.115s 00:07:52.041 10:15:28 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:52.041 10:15:28 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:52.041 ************************************ 00:07:52.041 END TEST accel_dualcast 00:07:52.041 ************************************ 00:07:52.041 10:15:28 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:52.041 10:15:28 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:52.041 10:15:28 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:52.041 10:15:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:52.041 10:15:28 accel -- common/autotest_common.sh@10 -- # set +x 00:07:52.041 ************************************ 00:07:52.041 START TEST accel_compare 00:07:52.041 ************************************ 00:07:52.041 10:15:28 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:07:52.041 10:15:28 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:52.041 10:15:28 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:52.041 10:15:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:52.041 10:15:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:52.041 10:15:28 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:52.041 10:15:28 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:52.041 10:15:28 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:52.041 10:15:28 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:52.041 10:15:28 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:52.041 10:15:28 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:52.041 10:15:28 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:52.041 10:15:28 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:52.041 10:15:28 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:52.041 10:15:28 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:52.041 [2024-07-15 10:15:28.910568] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
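Each case in this trace finishes with the same three checks, visible above for copy_crc32c and dualcast: a module name was captured, an opcode was captured, and the module is the software path (xtrace prints the quoted right-hand side as \s\o\f\t\w\a\r\e). Collected into a sketch; the wrapper function name is made up, while the three tests themselves are literal in the log:

  # check_sw_result is a hypothetical name; the body mirrors the end-of-test
  # checks in the xtrace above.
  check_sw_result() {
      [[ -n $accel_module ]]            # a module was reported (software here)
      [[ -n $accel_opc ]]               # an opcode was exercised (dualcast, compare, ...)
      [[ $accel_module == "software" ]] # the software engine handled the run
  }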
00:07:52.041 [2024-07-15 10:15:28.910654] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2745950 ] 00:07:52.041 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.041 [2024-07-15 10:15:28.979642] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.041 [2024-07-15 10:15:29.048121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.041 10:15:29 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:52.041 10:15:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:52.041 10:15:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:52.041 10:15:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:52.041 10:15:29 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:52.041 10:15:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:52.041 10:15:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:52.041 10:15:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:52.041 10:15:29 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:52.041 10:15:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:52.041 10:15:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:52.041 10:15:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:52.041 10:15:29 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:52.041 10:15:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:52.041 10:15:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:52.041 10:15:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:52.041 10:15:29 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:52.041 10:15:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:52.041 10:15:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:52.041 10:15:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:52.041 10:15:29 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:52.041 10:15:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:52.041 10:15:29 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:52.041 10:15:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:52.041 10:15:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:52.041 10:15:29 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:52.041 10:15:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:52.041 10:15:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:52.041 10:15:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:52.041 10:15:29 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:52.041 10:15:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:52.041 10:15:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:52.041 10:15:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:52.041 10:15:29 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:52.041 10:15:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:52.041 10:15:29 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:52.041 10:15:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:52.041 10:15:29 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:52.041 10:15:29 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:52.041 10:15:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:52.041 10:15:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:52.041 10:15:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:52.041 10:15:29 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:52.041 10:15:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:52.041 10:15:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:52.041 10:15:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:52.041 10:15:29 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:52.041 10:15:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:52.041 10:15:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:52.041 10:15:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:52.041 10:15:29 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:52.041 10:15:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:52.041 10:15:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:52.041 10:15:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:52.041 10:15:29 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:52.041 10:15:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:52.041 10:15:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:52.041 10:15:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:52.041 10:15:29 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:52.041 10:15:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:52.041 10:15:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:52.041 10:15:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:52.041 10:15:29 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:52.041 10:15:29 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:52.041 10:15:29 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:52.041 10:15:29 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:52.984 10:15:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:52.984 10:15:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:52.984 10:15:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:52.984 10:15:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:52.984 10:15:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:52.984 10:15:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:52.984 10:15:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:52.984 10:15:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:52.984 10:15:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:52.984 10:15:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:52.984 10:15:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:52.984 10:15:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:52.984 10:15:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:52.984 10:15:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:52.984 10:15:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:52.984 10:15:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:52.984 
10:15:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:52.984 10:15:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:52.984 10:15:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:52.984 10:15:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:52.984 10:15:30 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:52.984 10:15:30 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:52.984 10:15:30 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:52.984 10:15:30 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:52.984 10:15:30 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:52.984 10:15:30 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:52.984 10:15:30 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:52.984 00:07:52.984 real 0m1.297s 00:07:52.984 user 0m1.201s 00:07:52.984 sys 0m0.107s 00:07:52.984 10:15:30 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:53.246 10:15:30 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:53.246 ************************************ 00:07:53.246 END TEST accel_compare 00:07:53.246 ************************************ 00:07:53.246 10:15:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:53.246 10:15:30 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:53.246 10:15:30 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:53.246 10:15:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:53.246 10:15:30 accel -- common/autotest_common.sh@10 -- # set +x 00:07:53.246 ************************************ 00:07:53.246 START TEST accel_xor 00:07:53.246 ************************************ 00:07:53.246 10:15:30 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:07:53.246 10:15:30 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:53.246 10:15:30 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:53.246 10:15:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:53.246 10:15:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:53.246 10:15:30 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:53.246 10:15:30 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:53.246 10:15:30 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:53.246 10:15:30 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:53.246 10:15:30 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:53.246 10:15:30 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:53.246 10:15:30 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:53.246 10:15:30 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:53.246 10:15:30 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:53.246 10:15:30 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:53.246 [2024-07-15 10:15:30.283239] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:07:53.246 [2024-07-15 10:15:30.283337] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2746148 ] 00:07:53.246 EAL: No free 2048 kB hugepages reported on node 1 00:07:53.246 [2024-07-15 10:15:30.352707] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.246 [2024-07-15 10:15:30.422464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.507 10:15:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:53.507 10:15:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:53.507 10:15:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:53.507 10:15:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:53.507 10:15:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:53.508 10:15:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:53.508 10:15:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:53.508 10:15:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:53.508 10:15:30 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:53.508 10:15:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:53.508 10:15:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:53.508 10:15:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:53.508 10:15:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:53.508 10:15:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:53.508 10:15:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:53.508 10:15:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:53.508 10:15:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:53.508 10:15:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:53.508 10:15:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:53.508 10:15:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:53.508 10:15:30 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:53.508 10:15:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:53.508 10:15:30 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:53.508 10:15:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:53.508 10:15:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:53.508 10:15:30 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:53.508 10:15:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:53.508 10:15:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:53.508 10:15:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:53.508 10:15:30 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:53.508 10:15:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:53.508 10:15:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:53.508 10:15:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:53.508 10:15:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:53.508 10:15:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:53.508 10:15:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:53.508 10:15:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:53.508 10:15:30 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:53.508 10:15:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:53.508 10:15:30 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:07:53.508 10:15:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:53.508 10:15:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:53.508 10:15:30 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:53.508 10:15:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:53.508 10:15:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:53.508 10:15:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:53.508 10:15:30 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:53.508 10:15:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:53.508 10:15:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:53.508 10:15:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:53.508 10:15:30 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:53.508 10:15:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:53.508 10:15:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:53.508 10:15:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:53.508 10:15:30 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:53.508 10:15:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:53.508 10:15:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:53.508 10:15:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:53.508 10:15:30 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:53.508 10:15:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:53.508 10:15:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:53.508 10:15:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:53.508 10:15:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:53.508 10:15:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:53.508 10:15:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:53.508 10:15:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:53.508 10:15:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:53.508 10:15:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:53.508 10:15:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:53.508 10:15:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.451 10:15:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:54.451 10:15:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.451 10:15:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.451 10:15:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.451 10:15:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:54.451 10:15:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.451 10:15:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.451 10:15:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.451 10:15:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:54.451 10:15:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.451 10:15:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.451 10:15:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.451 10:15:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:54.451 10:15:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.451 10:15:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.451 10:15:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.451 10:15:31 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:07:54.451 10:15:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.451 10:15:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.451 10:15:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.451 10:15:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:54.451 10:15:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.451 10:15:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.451 10:15:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.451 10:15:31 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:54.451 10:15:31 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:54.451 10:15:31 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:54.451 00:07:54.451 real 0m1.300s 00:07:54.451 user 0m1.197s 00:07:54.451 sys 0m0.115s 00:07:54.451 10:15:31 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:54.451 10:15:31 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:54.451 ************************************ 00:07:54.451 END TEST accel_xor 00:07:54.451 ************************************ 00:07:54.451 10:15:31 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:54.451 10:15:31 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:54.451 10:15:31 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:54.451 10:15:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:54.451 10:15:31 accel -- common/autotest_common.sh@10 -- # set +x 00:07:54.451 ************************************ 00:07:54.451 START TEST accel_xor 00:07:54.451 ************************************ 00:07:54.451 10:15:31 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:07:54.451 10:15:31 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:54.451 10:15:31 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:54.451 10:15:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.451 10:15:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.451 10:15:31 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:54.451 10:15:31 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:54.451 10:15:31 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:54.451 10:15:31 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:54.451 10:15:31 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:54.451 10:15:31 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:54.451 10:15:31 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:54.451 10:15:31 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:54.451 10:15:31 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:54.451 10:15:31 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:54.712 [2024-07-15 10:15:31.656300] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
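accel_xor is exercised twice: the pass that just finished used the default two sources (the trace parsed a count of 2), while the pass starting above adds -x 3 and the trace parses a count of 3. Side by side, with the binary path taken from the log and accel.json again standing in for the fd-based config:

  # The two recorded xor invocations differ only in the -x source count.
  ACCEL_PERF=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf
  $ACCEL_PERF -c accel.json -t 1 -w xor -y        # first pass, count parsed as 2
  $ACCEL_PERF -c accel.json -t 1 -w xor -y -x 3   # second pass, count parsed as 3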
00:07:54.712 [2024-07-15 10:15:31.656390] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2746354 ] 00:07:54.712 EAL: No free 2048 kB hugepages reported on node 1 00:07:54.712 [2024-07-15 10:15:31.727698] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.712 [2024-07-15 10:15:31.797998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.712 10:15:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:54.712 10:15:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.712 10:15:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.712 10:15:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.712 10:15:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:54.712 10:15:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.712 10:15:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.712 10:15:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.712 10:15:31 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:54.712 10:15:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.712 10:15:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.712 10:15:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.712 10:15:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:54.712 10:15:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.712 10:15:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.712 10:15:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.712 10:15:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:54.712 10:15:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.712 10:15:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.712 10:15:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.712 10:15:31 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:54.712 10:15:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.712 10:15:31 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:54.712 10:15:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.712 10:15:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.712 10:15:31 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:54.712 10:15:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.712 10:15:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.712 10:15:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.712 10:15:31 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:54.712 10:15:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.712 10:15:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.712 10:15:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.712 10:15:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:54.712 10:15:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.712 10:15:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.712 10:15:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.712 10:15:31 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:54.712 10:15:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.712 10:15:31 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:07:54.712 10:15:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.712 10:15:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.712 10:15:31 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:54.712 10:15:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.712 10:15:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.712 10:15:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.712 10:15:31 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:54.712 10:15:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.712 10:15:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.712 10:15:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.712 10:15:31 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:54.712 10:15:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.712 10:15:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.712 10:15:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.712 10:15:31 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:54.712 10:15:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.712 10:15:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.712 10:15:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.712 10:15:31 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:54.712 10:15:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.712 10:15:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.712 10:15:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.712 10:15:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:54.712 10:15:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.712 10:15:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.712 10:15:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:54.712 10:15:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:54.712 10:15:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:54.712 10:15:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:54.712 10:15:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.207 10:15:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:56.207 10:15:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.207 10:15:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.207 10:15:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.207 10:15:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:56.207 10:15:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.207 10:15:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.207 10:15:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.207 10:15:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:56.207 10:15:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.207 10:15:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.207 10:15:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.207 10:15:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:56.207 10:15:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.207 10:15:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.207 10:15:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.207 10:15:32 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:07:56.207 10:15:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.207 10:15:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.207 10:15:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.207 10:15:32 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:56.207 10:15:32 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:56.207 10:15:32 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:56.207 10:15:32 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:56.207 10:15:32 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:56.207 10:15:32 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:56.207 10:15:32 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:56.207 00:07:56.207 real 0m1.300s 00:07:56.207 user 0m1.188s 00:07:56.207 sys 0m0.123s 00:07:56.207 10:15:32 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:56.207 10:15:32 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:56.207 ************************************ 00:07:56.207 END TEST accel_xor 00:07:56.207 ************************************ 00:07:56.207 10:15:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:56.207 10:15:32 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:56.207 10:15:32 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:56.207 10:15:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:56.207 10:15:32 accel -- common/autotest_common.sh@10 -- # set +x 00:07:56.207 ************************************ 00:07:56.207 START TEST accel_dif_verify 00:07:56.207 ************************************ 00:07:56.207 10:15:33 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:07:56.207 10:15:33 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:56.207 10:15:33 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:56.207 10:15:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:56.207 10:15:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:56.207 10:15:33 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:56.207 10:15:33 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:56.207 10:15:33 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:56.207 10:15:33 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:56.207 10:15:33 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:56.207 10:15:33 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:56.207 10:15:33 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:56.207 10:15:33 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:56.207 10:15:33 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:56.207 10:15:33 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:56.207 [2024-07-15 10:15:33.034362] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
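The DIF cases close out this part of the suite: dif_verify starting above and dif_generate further down are invoked the same way, and unlike the copy/compare/xor cases they carry no -y flag on the recorded command line. The extra byte sizes the trace parses for them (4096, 512 and 8 bytes) are consistent with DIF-protected blocks, though the xtrace does not label them. Hand-run equivalents, with the same config caveat as before:

  # DIF opcodes as recorded in this trace; accel.json stands in for the
  # /dev/fd/62 config (assumption), and the flags are copied from the log.
  ACCEL_PERF=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf
  $ACCEL_PERF -c accel.json -t 1 -w dif_verify
  $ACCEL_PERF -c accel.json -t 1 -w dif_generate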
00:07:56.207 [2024-07-15 10:15:33.034465] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2746694 ] 00:07:56.207 EAL: No free 2048 kB hugepages reported on node 1 00:07:56.207 [2024-07-15 10:15:33.107573] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.207 [2024-07-15 10:15:33.170968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.207 10:15:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:56.207 10:15:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:56.207 10:15:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:56.207 10:15:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:56.207 10:15:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:56.207 10:15:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:56.207 10:15:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:56.207 10:15:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:56.207 10:15:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:56.207 10:15:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:56.207 10:15:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:56.207 10:15:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:56.207 10:15:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:56.207 10:15:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:56.207 10:15:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:56.207 10:15:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:56.207 10:15:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:56.207 10:15:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:56.207 10:15:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:56.207 10:15:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:56.207 10:15:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:56.207 10:15:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:56.207 10:15:33 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:56.207 10:15:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:56.207 10:15:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:56.207 10:15:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:56.208 10:15:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:56.208 10:15:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:56.208 10:15:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:56.208 10:15:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:56.208 10:15:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:56.208 10:15:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:56.208 10:15:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:56.208 10:15:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:56.208 10:15:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:56.208 10:15:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:07:56.208 10:15:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:56.208 10:15:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:56.208 10:15:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:56.208 10:15:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:56.208 10:15:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:56.208 10:15:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:56.208 10:15:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:56.208 10:15:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:56.208 10:15:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:56.208 10:15:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:56.208 10:15:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:56.208 10:15:33 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:56.208 10:15:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:56.208 10:15:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:56.208 10:15:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:56.208 10:15:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:56.208 10:15:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:56.208 10:15:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:56.208 10:15:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:56.208 10:15:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:56.208 10:15:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:56.208 10:15:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:56.208 10:15:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:56.208 10:15:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:56.208 10:15:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:56.208 10:15:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:56.208 10:15:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:56.208 10:15:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:56.208 10:15:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:56.208 10:15:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:56.208 10:15:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:56.208 10:15:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:56.208 10:15:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:56.208 10:15:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:56.208 10:15:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:56.208 10:15:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:56.208 10:15:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:56.208 10:15:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:56.208 10:15:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:56.208 10:15:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:56.208 10:15:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:56.208 10:15:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:57.148 10:15:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:07:57.148 10:15:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:57.148 10:15:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:57.148 10:15:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:57.148 10:15:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:57.148 10:15:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:57.148 10:15:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:57.148 10:15:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:57.148 10:15:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:57.148 10:15:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:57.148 10:15:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:57.148 10:15:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:57.148 10:15:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:57.148 10:15:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:57.148 10:15:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:57.148 10:15:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:57.148 10:15:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:57.148 10:15:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:57.148 10:15:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:57.148 10:15:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:57.148 10:15:34 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:57.148 10:15:34 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:57.148 10:15:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:57.148 10:15:34 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:57.148 10:15:34 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:57.148 10:15:34 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:57.148 10:15:34 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:57.148 00:07:57.148 real 0m1.297s 00:07:57.148 user 0m1.202s 00:07:57.148 sys 0m0.108s 00:07:57.148 10:15:34 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:57.148 10:15:34 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:57.148 ************************************ 00:07:57.148 END TEST accel_dif_verify 00:07:57.148 ************************************ 00:07:57.148 10:15:34 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:57.148 10:15:34 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:57.148 10:15:34 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:57.148 10:15:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:57.148 10:15:34 accel -- common/autotest_common.sh@10 -- # set +x 00:07:57.409 ************************************ 00:07:57.409 START TEST accel_dif_generate 00:07:57.409 ************************************ 00:07:57.409 10:15:34 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:57.409 
10:15:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:57.409 [2024-07-15 10:15:34.402383] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:57.409 [2024-07-15 10:15:34.402459] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2747043 ] 00:07:57.409 EAL: No free 2048 kB hugepages reported on node 1 00:07:57.409 [2024-07-15 10:15:34.470937] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.409 [2024-07-15 10:15:34.538805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:57.409 10:15:34 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:57.409 10:15:34 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:57.409 10:15:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:58.795 10:15:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:58.795 10:15:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:58.795 10:15:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:58.795 10:15:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:58.795 10:15:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:58.795 10:15:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:58.795 10:15:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:58.795 10:15:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:58.795 10:15:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:58.795 10:15:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:58.795 10:15:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:58.795 10:15:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:58.795 10:15:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:58.795 10:15:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:58.795 10:15:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:58.795 10:15:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:58.795 10:15:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:58.795 10:15:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:58.795 10:15:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:58.795 10:15:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:58.795 10:15:35 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:58.795 10:15:35 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:58.795 10:15:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:58.795 10:15:35 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:58.795 10:15:35 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:58.795 10:15:35 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:58.795 10:15:35 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:58.795 00:07:58.795 real 0m1.294s 00:07:58.795 user 0m1.200s 00:07:58.795 sys 0m0.106s 00:07:58.795 10:15:35 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:58.795 10:15:35 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:58.795 ************************************ 00:07:58.795 END TEST accel_dif_generate 00:07:58.795 ************************************ 00:07:58.795 10:15:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:58.795 10:15:35 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:58.795 10:15:35 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:58.795 10:15:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:58.795 10:15:35 accel -- common/autotest_common.sh@10 -- # set +x 00:07:58.795 ************************************ 00:07:58.795 START TEST accel_dif_generate_copy 00:07:58.795 ************************************ 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:58.795 [2024-07-15 10:15:35.772526] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
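Both the dif_generate case that just finished and the dif_generate_copy case starting here are driven through the accel_test wrapper, which launches build/examples/accel_perf with a JSON accel configuration piped in over /dev/fd/62 (produced by build_accel_config). As a minimal sketch, using only flags and paths that already appear in this trace, the same one-second software run could presumably be reproduced by hand as shown below; with no hardware accel modules configured the tool is expected to stay on the software module, which matches the accel_module=software lines recorded in the trace:

  cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
  # 1-second dif_generate_copy run; flags taken from the traced accel_perf command
  ./build/examples/accel_perf -t 1 -w dif_generate_copy
  # the harness additionally passes -c /dev/fd/62 with the generated JSON config;
  # omitting it here assumes the software path is selected by default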
00:07:58.795 [2024-07-15 10:15:35.772591] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2747398 ] 00:07:58.795 EAL: No free 2048 kB hugepages reported on node 1 00:07:58.795 [2024-07-15 10:15:35.841224] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.795 [2024-07-15 10:15:35.905626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var 
val 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:58.795 10:15:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:00.179 10:15:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:00.179 10:15:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:00.179 10:15:37 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:08:00.179 10:15:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:00.179 10:15:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:00.179 10:15:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:00.179 10:15:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:00.179 10:15:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:00.179 10:15:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:00.179 10:15:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:00.179 10:15:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:00.179 10:15:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:00.179 10:15:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:00.179 10:15:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:00.179 10:15:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:00.179 10:15:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:00.179 10:15:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:00.179 10:15:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:00.179 10:15:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:00.179 10:15:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:00.179 10:15:37 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:08:00.179 10:15:37 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:00.179 10:15:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:08:00.179 10:15:37 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:08:00.179 10:15:37 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:00.179 10:15:37 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:08:00.179 10:15:37 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:00.179 00:08:00.179 real 0m1.292s 00:08:00.179 user 0m1.195s 00:08:00.179 sys 0m0.109s 00:08:00.179 10:15:37 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:00.179 10:15:37 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:08:00.179 ************************************ 00:08:00.179 END TEST accel_dif_generate_copy 00:08:00.179 ************************************ 00:08:00.179 10:15:37 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:00.179 10:15:37 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:08:00.179 10:15:37 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:00.179 10:15:37 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:08:00.179 10:15:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:00.179 10:15:37 accel -- common/autotest_common.sh@10 -- # set +x 00:08:00.179 ************************************ 00:08:00.179 START TEST accel_comp 00:08:00.179 ************************************ 00:08:00.179 10:15:37 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:00.179 10:15:37 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:08:00.179 10:15:37 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:08:00.179 10:15:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:00.179 10:15:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:00.179 10:15:37 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:00.179 10:15:37 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:00.179 10:15:37 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:08:00.179 10:15:37 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:00.179 10:15:37 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:00.179 10:15:37 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:00.179 10:15:37 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:00.179 10:15:37 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:00.179 10:15:37 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:08:00.179 10:15:37 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:08:00.179 [2024-07-15 10:15:37.135708] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:00.179 [2024-07-15 10:15:37.135764] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2747637 ] 00:08:00.179 EAL: No free 2048 kB hugepages reported on node 1 00:08:00.179 [2024-07-15 10:15:37.202944] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.179 [2024-07-15 10:15:37.271143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.179 10:15:37 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:00.179 10:15:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.179 10:15:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:00.179 10:15:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:00.179 10:15:37 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:00.179 10:15:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.179 10:15:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:00.179 10:15:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:00.179 10:15:37 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:00.179 10:15:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.179 10:15:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:00.179 10:15:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:00.179 10:15:37 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:08:00.179 10:15:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.179 10:15:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:00.179 10:15:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:00.179 10:15:37 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:00.179 10:15:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.179 10:15:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:00.179 10:15:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:00.179 10:15:37 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:08:00.179 10:15:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.179 10:15:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:00.179 10:15:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:00.179 10:15:37 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:08:00.179 10:15:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.179 10:15:37 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:08:00.179 10:15:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:00.179 10:15:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:00.179 10:15:37 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:00.179 10:15:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.179 10:15:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:00.179 10:15:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:00.179 10:15:37 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:00.179 10:15:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.179 10:15:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:00.179 10:15:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:00.179 10:15:37 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:08:00.179 10:15:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.180 10:15:37 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:08:00.180 10:15:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:00.180 10:15:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:00.180 10:15:37 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:00.180 10:15:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.180 10:15:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:00.180 10:15:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:00.180 10:15:37 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:08:00.180 10:15:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.180 10:15:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:00.180 10:15:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:00.180 10:15:37 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:08:00.180 10:15:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.180 10:15:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:00.180 10:15:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:00.180 10:15:37 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:08:00.180 10:15:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.180 10:15:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:00.180 10:15:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:00.180 10:15:37 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:00.180 10:15:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.180 10:15:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:00.180 10:15:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:00.180 10:15:37 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:08:00.180 10:15:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.180 10:15:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:00.180 10:15:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var 
val 00:08:00.180 10:15:37 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:00.180 10:15:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.180 10:15:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:00.180 10:15:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:00.180 10:15:37 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:00.180 10:15:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:00.180 10:15:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:00.180 10:15:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:01.562 10:15:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:01.562 10:15:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:01.562 10:15:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:01.562 10:15:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:01.562 10:15:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:01.562 10:15:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:01.562 10:15:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:01.562 10:15:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:01.562 10:15:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:01.562 10:15:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:01.562 10:15:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:01.562 10:15:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:01.562 10:15:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:01.562 10:15:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:01.562 10:15:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:01.562 10:15:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:01.562 10:15:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:01.562 10:15:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:01.562 10:15:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:01.562 10:15:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:01.562 10:15:38 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:08:01.562 10:15:38 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:08:01.562 10:15:38 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:08:01.562 10:15:38 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:08:01.562 10:15:38 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:01.562 10:15:38 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:08:01.562 10:15:38 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:01.562 00:08:01.562 real 0m1.293s 00:08:01.562 user 0m1.199s 00:08:01.562 sys 0m0.108s 00:08:01.562 10:15:38 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:01.562 10:15:38 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:08:01.562 ************************************ 00:08:01.562 END TEST accel_comp 00:08:01.562 ************************************ 00:08:01.562 10:15:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:01.562 10:15:38 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:08:01.562 10:15:38 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:08:01.562 10:15:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:01.562 10:15:38 accel -- 
common/autotest_common.sh@10 -- # set +x 00:08:01.562 ************************************ 00:08:01.562 START TEST accel_decomp 00:08:01.562 ************************************ 00:08:01.562 10:15:38 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:08:01.562 10:15:38 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:08:01.562 10:15:38 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:08:01.562 10:15:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:01.562 10:15:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:01.562 10:15:38 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:08:01.562 10:15:38 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:08:01.562 10:15:38 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:08:01.562 10:15:38 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:01.562 10:15:38 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:01.562 10:15:38 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:08:01.563 [2024-07-15 10:15:38.506751] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
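The compress case above and the decompress case starting here both point accel_perf at the bundled test file test/accel/bib via -l, with -y added on the decompress run as a verification switch. A minimal manual equivalent, again restricted to flags visible in the traced commands and assuming the same workspace layout as this job:

  cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
  # 1-second compress run against the bundled test input
  ./build/examples/accel_perf -t 1 -w compress -l ./test/accel/bib
  # 1-second decompress run of the same input, with -y as in the traced command
  ./build/examples/accel_perf -t 1 -w decompress -l ./test/accel/bib -y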
00:08:01.563 [2024-07-15 10:15:38.506827] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2747826 ] 00:08:01.563 EAL: No free 2048 kB hugepages reported on node 1 00:08:01.563 [2024-07-15 10:15:38.584678] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.563 [2024-07-15 10:15:38.654655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:01.563 10:15:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:02.946 10:15:39 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:02.946 10:15:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:02.946 10:15:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:02.946 10:15:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:02.946 10:15:39 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:02.946 10:15:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:02.946 10:15:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:02.946 10:15:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:02.946 10:15:39 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:02.946 10:15:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:02.946 10:15:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:02.946 10:15:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:02.946 10:15:39 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:02.946 10:15:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:02.946 10:15:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:02.946 10:15:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:02.946 10:15:39 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:02.946 10:15:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:02.946 10:15:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:02.946 10:15:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:02.946 10:15:39 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:08:02.946 10:15:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:08:02.946 10:15:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:08:02.946 10:15:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:08:02.946 10:15:39 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:02.946 10:15:39 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:02.946 10:15:39 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:02.946 00:08:02.946 real 0m1.308s 00:08:02.946 user 0m1.206s 00:08:02.946 sys 0m0.115s 00:08:02.946 10:15:39 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:02.946 10:15:39 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:08:02.946 ************************************ 00:08:02.946 END TEST accel_decomp 00:08:02.946 ************************************ 00:08:02.946 10:15:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:02.946 10:15:39 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:02.946 10:15:39 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:08:02.946 10:15:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:02.946 10:15:39 accel -- common/autotest_common.sh@10 -- # set +x 00:08:02.946 ************************************ 00:08:02.946 START TEST accel_decomp_full 00:08:02.946 ************************************ 00:08:02.946 10:15:39 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:02.946 10:15:39 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:08:02.946 10:15:39 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:08:02.946 10:15:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:02.946 10:15:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:02.946 10:15:39 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:02.946 10:15:39 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:02.946 10:15:39 accel.accel_decomp_full -- 
accel/accel.sh@12 -- # build_accel_config 00:08:02.946 10:15:39 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:02.946 10:15:39 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:02.946 10:15:39 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:02.946 10:15:39 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:02.946 10:15:39 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:02.946 10:15:39 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:08:02.946 10:15:39 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:08:02.946 [2024-07-15 10:15:39.888340] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:02.946 [2024-07-15 10:15:39.888424] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2748153 ] 00:08:02.946 EAL: No free 2048 kB hugepages reported on node 1 00:08:02.946 [2024-07-15 10:15:39.957724] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.947 [2024-07-15 10:15:40.030259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@23 
-- # accel_opc=decompress 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:02.947 10:15:40 
accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:02.947 10:15:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:04.328 10:15:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:04.328 10:15:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:04.328 10:15:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:04.328 10:15:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:04.328 10:15:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:04.328 10:15:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:04.328 10:15:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:04.328 10:15:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:04.328 10:15:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:04.328 10:15:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:04.328 10:15:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:04.328 10:15:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:04.328 10:15:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:04.328 10:15:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:04.328 10:15:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:04.328 10:15:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:04.328 10:15:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:04.328 10:15:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:04.328 10:15:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:04.328 10:15:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:04.328 10:15:41 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:08:04.328 10:15:41 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:08:04.328 10:15:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:08:04.328 10:15:41 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:08:04.328 10:15:41 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:04.328 10:15:41 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:04.329 10:15:41 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:04.329 00:08:04.329 real 0m1.312s 00:08:04.329 user 0m1.212s 00:08:04.329 sys 0m0.111s 00:08:04.329 10:15:41 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:04.329 10:15:41 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:08:04.329 ************************************ 00:08:04.329 END TEST accel_decomp_full 00:08:04.329 ************************************ 00:08:04.329 10:15:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:04.329 10:15:41 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:04.329 10:15:41 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:08:04.329 10:15:41 accel -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.329 10:15:41 accel -- common/autotest_common.sh@10 -- # set +x 00:08:04.329 ************************************ 00:08:04.329 START TEST accel_decomp_mcore 00:08:04.329 ************************************ 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:04.329 [2024-07-15 10:15:41.270465] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
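The multicore variant started here differs from the previous decompress run only by the -m 0xf core mask; the trace below shows reactors coming up on cores 0 through 3, and the roughly 4.4 s of user time against a ~1.3 s wall-clock result reported at the end of this test is consistent with four cores working in parallel. A sketch of the same invocation, under the same path assumptions as the earlier examples:

  cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
  # same decompress workload, spread across cores 0-3 via the core mask
  ./build/examples/accel_perf -t 1 -w decompress -l ./test/accel/bib -y -m 0xf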
00:08:04.329 [2024-07-15 10:15:41.270561] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2748504 ] 00:08:04.329 EAL: No free 2048 kB hugepages reported on node 1 00:08:04.329 [2024-07-15 10:15:41.340264] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:04.329 [2024-07-15 10:15:41.412462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:04.329 [2024-07-15 10:15:41.412572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:04.329 [2024-07-15 10:15:41.412726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.329 [2024-07-15 10:15:41.412726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:04.329 10:15:41 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:08:04.329 10:15:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.710 10:15:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:05.710 10:15:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:05.710 10:15:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:05.710 10:15:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.710 10:15:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:05.710 10:15:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:05.710 10:15:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:05.710 10:15:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.710 10:15:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:05.710 10:15:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:05.711 10:15:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:05.711 10:15:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.711 10:15:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:05.711 10:15:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:05.711 10:15:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:05.711 10:15:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.711 10:15:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:05.711 10:15:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:05.711 10:15:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:05.711 10:15:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.711 10:15:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:05.711 10:15:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:05.711 10:15:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:05.711 10:15:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.711 10:15:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:05.711 10:15:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:05.711 10:15:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:05.711 10:15:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.711 10:15:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:05.711 10:15:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:05.711 10:15:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:05.711 10:15:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.711 10:15:42 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:08:05.711 10:15:42 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:05.711 10:15:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:05.711 10:15:42 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.711 10:15:42 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:05.711 10:15:42 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:05.711 10:15:42 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:05.711 00:08:05.711 real 0m1.312s 00:08:05.711 user 0m4.443s 00:08:05.711 sys 0m0.116s 00:08:05.711 10:15:42 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:05.711 10:15:42 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:05.711 ************************************ 00:08:05.711 END TEST accel_decomp_mcore 00:08:05.711 ************************************ 00:08:05.711 10:15:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:05.711 10:15:42 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:05.711 10:15:42 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:05.711 10:15:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:05.711 10:15:42 accel -- common/autotest_common.sh@10 -- # set +x 00:08:05.711 ************************************ 00:08:05.711 START TEST accel_decomp_full_mcore 00:08:05.711 ************************************ 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:08:05.711 [2024-07-15 10:15:42.654092] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
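accel_decomp_full_mcore, which starts just above, reuses the same command with one addition: -o 0. Judging from the '4096 bytes' value parsed in the previous run versus the '111250 bytes' value parsed below, -o 0 sizes each decompress operation to the whole uncompressed bib file rather than 4 KiB chunks. A sketch of the difference, identical to the earlier command except for the extra flag:

    # -o 0: let each transfer cover the full 111250-byte decompressed file
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf \
        -t 1 -w decompress -y -m 0xf -o 0 \
        -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib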
00:08:05.711 [2024-07-15 10:15:42.654181] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2748860 ] 00:08:05.711 EAL: No free 2048 kB hugepages reported on node 1 00:08:05.711 [2024-07-15 10:15:42.724631] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:05.711 [2024-07-15 10:15:42.795270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:05.711 [2024-07-15 10:15:42.795523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.711 [2024-07-15 10:15:42.795524] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:05.711 [2024-07-15 10:15:42.795362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:05.711 10:15:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:07.091 10:15:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:07.091 10:15:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:07.091 10:15:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:07.091 10:15:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:07.091 10:15:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:07.091 10:15:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:07.091 10:15:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:07.091 10:15:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:07.091 10:15:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:07.091 10:15:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:07.091 10:15:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:07.091 10:15:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:07.091 10:15:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:07.091 10:15:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:07.091 10:15:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:07.091 10:15:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:07.091 10:15:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:07.091 10:15:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:07.091 10:15:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:07.091 10:15:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:07.091 10:15:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:07.091 10:15:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:07.091 10:15:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:07.091 10:15:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:07.091 10:15:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:07.091 10:15:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:07.091 10:15:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:07.091 10:15:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:07.091 10:15:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:07.091 10:15:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:07.091 10:15:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:08:07.091 10:15:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:07.091 10:15:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:08:07.091 10:15:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:08:07.091 10:15:43 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:08:07.091 10:15:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:08:07.091 10:15:43 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:07.091 10:15:43 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:07.091 10:15:43 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:07.091 00:08:07.091 real 0m1.324s 00:08:07.091 user 0m4.493s 00:08:07.091 sys 0m0.123s 00:08:07.091 10:15:43 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:07.091 10:15:43 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:08:07.091 ************************************ 00:08:07.091 END TEST accel_decomp_full_mcore 00:08:07.091 ************************************ 00:08:07.091 10:15:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:07.091 10:15:43 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:07.091 10:15:43 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:08:07.091 10:15:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:07.091 10:15:43 accel -- common/autotest_common.sh@10 -- # set +x 00:08:07.091 ************************************ 00:08:07.091 START TEST accel_decomp_mthread 00:08:07.091 ************************************ 00:08:07.091 10:15:44 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:07.091 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:07.091 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:07.091 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:07.091 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:07.091 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:07.091 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:07.091 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:07.091 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:07.091 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:07.091 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:07.091 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:07.091 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:07.091 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:07.091 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:08:07.091 [2024-07-15 10:15:44.052634] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:08:07.091 [2024-07-15 10:15:44.052729] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2749127 ] 00:08:07.091 EAL: No free 2048 kB hugepages reported on node 1 00:08:07.091 [2024-07-15 10:15:44.122347] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.091 [2024-07-15 10:15:44.191137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.091 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:07.091 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:07.091 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:07.091 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:07.091 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:07.091 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:07.091 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:07.091 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:07.091 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:07.091 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:07.091 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:07.091 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:07.091 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:07.091 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:07.091 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:07.091 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:07.091 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:07.091 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:07.091 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:07.091 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:07.091 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:07.091 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:07.091 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:07.091 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:07.091 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:07.091 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:07.091 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:07.091 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:07.091 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:07.091 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:07.091 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:07.091 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:07.091 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:07.091 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:07.091 10:15:44 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:07.091 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:07.091 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:07.091 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:08:07.091 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:07.091 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:08:07.091 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:07.091 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:07.091 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:07.091 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:07.091 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:07.091 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:07.091 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:07.091 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:07.091 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:07.091 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:07.091 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:08:07.091 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:07.092 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:07.092 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:07.092 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:08:07.092 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:07.092 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:07.092 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:07.092 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:07.092 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:07.092 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:07.092 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:07.092 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:07.092 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:07.092 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:07.092 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:07.092 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:07.092 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:07.092 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:07.092 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:07.092 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:07.092 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:07.092 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:07.092 10:15:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:08.475 10:15:45 accel.accel_decomp_mthread -- 
accel/accel.sh@20 -- # val= 00:08:08.475 10:15:45 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:08.475 10:15:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:08.475 10:15:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:08.475 10:15:45 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:08.475 10:15:45 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:08.475 10:15:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:08.475 10:15:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:08.475 10:15:45 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:08.475 10:15:45 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:08.475 10:15:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:08.475 10:15:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:08.475 10:15:45 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:08.475 10:15:45 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:08.475 10:15:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:08.475 10:15:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:08.475 10:15:45 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:08.475 10:15:45 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:08.475 10:15:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:08.475 10:15:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:08.475 10:15:45 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:08.475 10:15:45 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:08.475 10:15:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:08.475 10:15:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:08.475 10:15:45 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:08:08.475 10:15:45 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:08.475 10:15:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:08.475 10:15:45 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:08.475 10:15:45 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:08.475 10:15:45 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:08.475 10:15:45 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:08.475 00:08:08.475 real 0m1.306s 00:08:08.475 user 0m1.210s 00:08:08.475 sys 0m0.109s 00:08:08.475 10:15:45 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:08.475 10:15:45 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:08:08.475 ************************************ 00:08:08.475 END TEST accel_decomp_mthread 00:08:08.475 ************************************ 00:08:08.475 10:15:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:08.475 10:15:45 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:08.475 10:15:45 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:08:08.475 10:15:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:08.475 10:15:45 accel -- 
common/autotest_common.sh@10 -- # set +x 00:08:08.475 ************************************ 00:08:08.475 START TEST accel_decomp_full_mthread 00:08:08.475 ************************************ 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:08:08.475 [2024-07-15 10:15:45.430317] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
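The two mthread cases drop the multi-core mask and instead pass -T 2, which asks accel_perf for two worker threads instead of one while the app stays on a single core (the EAL lines show -c 0x1 and 'Total cores available: 1'); accel_decomp_full_mthread, whose setup is recorded just above, additionally keeps the full-size -o 0 transfers. A sketch of that combined invocation:

    # one core, two worker threads (-T 2), full 111250-byte transfers (-o 0)
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf \
        -t 1 -w decompress -y -T 2 -o 0 \
        -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib

All four decompress permutations are time-bounded by -t 1, which is why the recorded wall times each land just over one second of 'real' time.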
00:08:08.475 [2024-07-15 10:15:45.430407] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2749320 ] 00:08:08.475 EAL: No free 2048 kB hugepages reported on node 1 00:08:08.475 [2024-07-15 10:15:45.500411] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.475 [2024-07-15 10:15:45.570097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:08.475 10:15:45 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:08.475 10:15:45 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:08.475 10:15:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.858 10:15:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:09.858 10:15:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.858 10:15:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.858 10:15:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.858 10:15:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:09.858 10:15:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.858 10:15:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.858 10:15:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.858 10:15:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:09.858 10:15:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.858 10:15:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.858 10:15:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.858 10:15:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:09.858 10:15:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.858 10:15:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.858 10:15:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.858 10:15:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:09.858 10:15:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.858 10:15:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.858 10:15:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.858 10:15:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:09.858 10:15:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.858 10:15:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.858 10:15:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.858 10:15:46 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:08:09.858 10:15:46 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:08:09.858 10:15:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:08:09.858 10:15:46 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:08:09.858 10:15:46 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:09.858 10:15:46 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:09.858 10:15:46 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:09.858 00:08:09.858 real 0m1.335s 00:08:09.858 user 0m1.233s 00:08:09.858 sys 0m0.115s 00:08:09.858 10:15:46 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:09.858 10:15:46 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:08:09.858 ************************************ 00:08:09.858 END 
TEST accel_decomp_full_mthread 00:08:09.858 ************************************ 00:08:09.858 10:15:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:09.858 10:15:46 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:08:09.858 10:15:46 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:09.858 10:15:46 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:09.858 10:15:46 accel -- accel/accel.sh@137 -- # build_accel_config 00:08:09.858 10:15:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.858 10:15:46 accel -- common/autotest_common.sh@10 -- # set +x 00:08:09.858 10:15:46 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:09.858 10:15:46 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:09.858 10:15:46 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:09.858 10:15:46 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:09.858 10:15:46 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:09.858 10:15:46 accel -- accel/accel.sh@40 -- # local IFS=, 00:08:09.858 10:15:46 accel -- accel/accel.sh@41 -- # jq -r . 00:08:09.858 ************************************ 00:08:09.858 START TEST accel_dif_functional_tests 00:08:09.858 ************************************ 00:08:09.858 10:15:46 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:09.858 [2024-07-15 10:15:46.866800] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:09.858 [2024-07-15 10:15:46.866876] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2749604 ] 00:08:09.858 EAL: No free 2048 kB hugepages reported on node 1 00:08:09.858 [2024-07-15 10:15:46.937725] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:09.858 [2024-07-15 10:15:47.010833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:09.858 [2024-07-15 10:15:47.010942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:09.858 [2024-07-15 10:15:47.010945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.118 00:08:10.118 00:08:10.118 CUnit - A unit testing framework for C - Version 2.1-3 00:08:10.118 http://cunit.sourceforge.net/ 00:08:10.118 00:08:10.118 00:08:10.118 Suite: accel_dif 00:08:10.118 Test: verify: DIF generated, GUARD check ...passed 00:08:10.118 Test: verify: DIF generated, APPTAG check ...passed 00:08:10.118 Test: verify: DIF generated, REFTAG check ...passed 00:08:10.118 Test: verify: DIF not generated, GUARD check ...[2024-07-15 10:15:47.066687] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:10.118 passed 00:08:10.118 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 10:15:47.066733] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:10.118 passed 00:08:10.118 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 10:15:47.066754] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:10.118 passed 00:08:10.118 Test: verify: APPTAG correct, APPTAG check ...passed 00:08:10.118 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 
10:15:47.066805] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:08:10.118 passed 00:08:10.118 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:08:10.118 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:08:10.118 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:08:10.118 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 10:15:47.066917] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:08:10.118 passed 00:08:10.118 Test: verify copy: DIF generated, GUARD check ...passed 00:08:10.118 Test: verify copy: DIF generated, APPTAG check ...passed 00:08:10.118 Test: verify copy: DIF generated, REFTAG check ...passed 00:08:10.118 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 10:15:47.067037] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:10.118 passed 00:08:10.118 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 10:15:47.067059] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:10.118 passed 00:08:10.118 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 10:15:47.067081] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:10.118 passed 00:08:10.118 Test: generate copy: DIF generated, GUARD check ...passed 00:08:10.118 Test: generate copy: DIF generated, APTTAG check ...passed 00:08:10.118 Test: generate copy: DIF generated, REFTAG check ...passed 00:08:10.118 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:08:10.118 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:08:10.118 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:08:10.118 Test: generate copy: iovecs-len validate ...[2024-07-15 10:15:47.067289] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:08:10.118 passed 00:08:10.118 Test: generate copy: buffer alignment validate ...passed 00:08:10.118 00:08:10.118 Run Summary: Type Total Ran Passed Failed Inactive 00:08:10.118 suites 1 1 n/a 0 0 00:08:10.118 tests 26 26 26 0 0 00:08:10.118 asserts 115 115 115 0 n/a 00:08:10.118 00:08:10.118 Elapsed time = 0.002 seconds 00:08:10.118 00:08:10.118 real 0m0.372s 00:08:10.118 user 0m0.488s 00:08:10.118 sys 0m0.146s 00:08:10.118 10:15:47 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:10.118 10:15:47 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:08:10.118 ************************************ 00:08:10.118 END TEST accel_dif_functional_tests 00:08:10.118 ************************************ 00:08:10.118 10:15:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:08:10.118 00:08:10.118 real 0m30.313s 00:08:10.118 user 0m33.732s 00:08:10.118 sys 0m4.339s 00:08:10.118 10:15:47 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:10.118 10:15:47 accel -- common/autotest_common.sh@10 -- # set +x 00:08:10.118 ************************************ 00:08:10.118 END TEST accel 00:08:10.118 ************************************ 00:08:10.118 10:15:47 -- common/autotest_common.sh@1142 -- # return 0 00:08:10.118 10:15:47 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel_rpc.sh 00:08:10.118 10:15:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:10.118 10:15:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:10.118 10:15:47 -- common/autotest_common.sh@10 -- # set +x 00:08:10.118 ************************************ 00:08:10.118 START TEST accel_rpc 00:08:10.118 ************************************ 00:08:10.118 10:15:47 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel_rpc.sh 00:08:10.379 * Looking for test storage... 00:08:10.379 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel 00:08:10.379 10:15:47 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:10.379 10:15:47 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2749878 00:08:10.379 10:15:47 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 2749878 00:08:10.379 10:15:47 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:08:10.379 10:15:47 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 2749878 ']' 00:08:10.379 10:15:47 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.379 10:15:47 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:10.379 10:15:47 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:10.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:10.379 10:15:47 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:10.379 10:15:47 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.379 [2024-07-15 10:15:47.442133] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
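On the accel_dif_functional_tests run that just finished: the harness executes the CUnit suite in test/accel/dif directly, and the *ERROR* lines quoted above (Guard, App Tag and Ref Tag mismatches, plus the misaligned bounce_iovs message) are the expected output of its negative cases, which is why each of them is still marked 'passed' and the summary reports 26/26 tests and 115/115 asserts with 0 failures. The invocation behind it is simply:

    # run the DIF functional tests; the accel JSON config arrives on fd 62
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62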
00:08:10.379 [2024-07-15 10:15:47.442200] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2749878 ] 00:08:10.379 EAL: No free 2048 kB hugepages reported on node 1 00:08:10.379 [2024-07-15 10:15:47.511085] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.640 [2024-07-15 10:15:47.582151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.209 10:15:48 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:11.209 10:15:48 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:08:11.209 10:15:48 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:08:11.209 10:15:48 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:08:11.209 10:15:48 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:08:11.209 10:15:48 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:08:11.209 10:15:48 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:08:11.209 10:15:48 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:11.209 10:15:48 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:11.210 10:15:48 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:11.210 ************************************ 00:08:11.210 START TEST accel_assign_opcode 00:08:11.210 ************************************ 00:08:11.210 10:15:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:08:11.210 10:15:48 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:08:11.210 10:15:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:11.210 10:15:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:11.210 [2024-07-15 10:15:48.240077] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:08:11.210 10:15:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:11.210 10:15:48 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:08:11.210 10:15:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:11.210 10:15:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:11.210 [2024-07-15 10:15:48.252104] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:08:11.210 10:15:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:11.210 10:15:48 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:08:11.210 10:15:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:11.210 10:15:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:11.210 10:15:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:11.210 10:15:48 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:08:11.210 10:15:48 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:08:11.210 10:15:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 
00:08:11.210 10:15:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:11.210 10:15:48 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:08:11.470 10:15:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:11.470 software 00:08:11.470 00:08:11.470 real 0m0.211s 00:08:11.470 user 0m0.050s 00:08:11.470 sys 0m0.010s 00:08:11.470 10:15:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:11.470 10:15:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:08:11.470 ************************************ 00:08:11.470 END TEST accel_assign_opcode 00:08:11.470 ************************************ 00:08:11.470 10:15:48 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:08:11.470 10:15:48 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 2749878 00:08:11.470 10:15:48 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 2749878 ']' 00:08:11.470 10:15:48 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 2749878 00:08:11.470 10:15:48 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:08:11.470 10:15:48 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:11.470 10:15:48 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2749878 00:08:11.470 10:15:48 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:11.470 10:15:48 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:11.470 10:15:48 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2749878' 00:08:11.470 killing process with pid 2749878 00:08:11.470 10:15:48 accel_rpc -- common/autotest_common.sh@967 -- # kill 2749878 00:08:11.470 10:15:48 accel_rpc -- common/autotest_common.sh@972 -- # wait 2749878 00:08:11.730 00:08:11.730 real 0m1.452s 00:08:11.730 user 0m1.533s 00:08:11.730 sys 0m0.397s 00:08:11.730 10:15:48 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:11.730 10:15:48 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:11.730 ************************************ 00:08:11.730 END TEST accel_rpc 00:08:11.730 ************************************ 00:08:11.730 10:15:48 -- common/autotest_common.sh@1142 -- # return 0 00:08:11.730 10:15:48 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:08:11.730 10:15:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:11.730 10:15:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:11.730 10:15:48 -- common/autotest_common.sh@10 -- # set +x 00:08:11.730 ************************************ 00:08:11.730 START TEST app_cmdline 00:08:11.730 ************************************ 00:08:11.730 10:15:48 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:08:11.730 * Looking for test storage... 
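[Editor's note] The accel_rpc trace above reduces to a short RPC sequence. The sketch below is an illustrative recap of the commands visible in the trace, with "spdk_tgt" and "rpc.py" standing in for the harness's full binary path and its rpc_cmd wrapper; the exact assertions live in test/accel/accel_rpc.sh.

    # Recap of the accel_assign_opcode flow traced above (paths shortened, not the literal script)
    spdk_tgt --wait-for-rpc &                        # start the target without initializing the framework
    rpc.py accel_assign_opc -o copy -m incorrect     # assigning to a bogus module is accepted pre-init; the test overrides it next
    rpc.py accel_assign_opc -o copy -m software      # reassign the copy opcode to the software module
    rpc.py framework_start_init                      # framework init applies the assignment
    rpc.py accel_get_opc_assignments | jq -r .copy   # expected to report "software", as the grep above confirms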
00:08:11.730 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:08:11.730 10:15:48 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:11.730 10:15:48 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2750209 00:08:11.730 10:15:48 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2750209 00:08:11.730 10:15:48 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:11.730 10:15:48 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 2750209 ']' 00:08:11.730 10:15:48 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:11.731 10:15:48 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:11.731 10:15:48 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:11.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:11.731 10:15:48 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:11.731 10:15:48 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:11.991 [2024-07-15 10:15:48.987308] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:11.991 [2024-07-15 10:15:48.987387] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2750209 ] 00:08:11.991 EAL: No free 2048 kB hugepages reported on node 1 00:08:11.991 [2024-07-15 10:15:49.058827] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.991 [2024-07-15 10:15:49.133888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.561 10:15:49 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:12.561 10:15:49 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:08:12.561 10:15:49 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:08:12.821 { 00:08:12.821 "version": "SPDK v24.09-pre git sha1 719d03c6a", 00:08:12.821 "fields": { 00:08:12.821 "major": 24, 00:08:12.821 "minor": 9, 00:08:12.821 "patch": 0, 00:08:12.821 "suffix": "-pre", 00:08:12.821 "commit": "719d03c6a" 00:08:12.821 } 00:08:12.821 } 00:08:12.821 10:15:49 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:12.821 10:15:49 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:12.821 10:15:49 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:12.821 10:15:49 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:12.821 10:15:49 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:12.821 10:15:49 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:12.821 10:15:49 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:12.821 10:15:49 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.821 10:15:49 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:12.821 10:15:49 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.821 10:15:49 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:12.821 10:15:49 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:12.821 10:15:49 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:12.821 10:15:49 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:08:12.821 10:15:49 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:12.821 10:15:49 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:12.821 10:15:49 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:12.821 10:15:49 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:12.821 10:15:49 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:12.821 10:15:49 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:12.821 10:15:49 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:12.821 10:15:49 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:12.821 10:15:49 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:08:12.821 10:15:49 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:13.081 request: 00:08:13.082 { 00:08:13.082 "method": "env_dpdk_get_mem_stats", 00:08:13.082 "req_id": 1 00:08:13.082 } 00:08:13.082 Got JSON-RPC error response 00:08:13.082 response: 00:08:13.082 { 00:08:13.082 "code": -32601, 00:08:13.082 "message": "Method not found" 00:08:13.082 } 00:08:13.082 10:15:50 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:08:13.082 10:15:50 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:13.082 10:15:50 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:13.082 10:15:50 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:13.082 10:15:50 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2750209 00:08:13.082 10:15:50 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 2750209 ']' 00:08:13.082 10:15:50 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 2750209 00:08:13.082 10:15:50 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:08:13.082 10:15:50 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:13.082 10:15:50 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2750209 00:08:13.082 10:15:50 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:13.082 10:15:50 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:13.082 10:15:50 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2750209' 00:08:13.082 killing process with pid 2750209 00:08:13.082 10:15:50 app_cmdline -- common/autotest_common.sh@967 -- # kill 2750209 00:08:13.082 10:15:50 app_cmdline -- common/autotest_common.sh@972 -- # wait 2750209 00:08:13.343 00:08:13.343 real 0m1.523s 00:08:13.343 user 0m1.775s 00:08:13.343 sys 0m0.427s 00:08:13.343 10:15:50 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:13.343 10:15:50 app_cmdline -- 
common/autotest_common.sh@10 -- # set +x 00:08:13.343 ************************************ 00:08:13.343 END TEST app_cmdline 00:08:13.343 ************************************ 00:08:13.343 10:15:50 -- common/autotest_common.sh@1142 -- # return 0 00:08:13.343 10:15:50 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:08:13.343 10:15:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:13.343 10:15:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:13.343 10:15:50 -- common/autotest_common.sh@10 -- # set +x 00:08:13.343 ************************************ 00:08:13.343 START TEST version 00:08:13.343 ************************************ 00:08:13.343 10:15:50 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:08:13.343 * Looking for test storage... 00:08:13.343 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:08:13.343 10:15:50 version -- app/version.sh@17 -- # get_header_version major 00:08:13.343 10:15:50 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:08:13.343 10:15:50 version -- app/version.sh@14 -- # cut -f2 00:08:13.343 10:15:50 version -- app/version.sh@14 -- # tr -d '"' 00:08:13.343 10:15:50 version -- app/version.sh@17 -- # major=24 00:08:13.343 10:15:50 version -- app/version.sh@18 -- # get_header_version minor 00:08:13.343 10:15:50 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:08:13.343 10:15:50 version -- app/version.sh@14 -- # cut -f2 00:08:13.343 10:15:50 version -- app/version.sh@14 -- # tr -d '"' 00:08:13.343 10:15:50 version -- app/version.sh@18 -- # minor=9 00:08:13.604 10:15:50 version -- app/version.sh@19 -- # get_header_version patch 00:08:13.604 10:15:50 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:08:13.604 10:15:50 version -- app/version.sh@14 -- # cut -f2 00:08:13.604 10:15:50 version -- app/version.sh@14 -- # tr -d '"' 00:08:13.604 10:15:50 version -- app/version.sh@19 -- # patch=0 00:08:13.604 10:15:50 version -- app/version.sh@20 -- # get_header_version suffix 00:08:13.604 10:15:50 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:08:13.604 10:15:50 version -- app/version.sh@14 -- # cut -f2 00:08:13.604 10:15:50 version -- app/version.sh@14 -- # tr -d '"' 00:08:13.604 10:15:50 version -- app/version.sh@20 -- # suffix=-pre 00:08:13.604 10:15:50 version -- app/version.sh@22 -- # version=24.9 00:08:13.604 10:15:50 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:13.604 10:15:50 version -- app/version.sh@28 -- # version=24.9rc0 00:08:13.604 10:15:50 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:08:13.604 10:15:50 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:13.604 10:15:50 version -- app/version.sh@30 -- # py_version=24.9rc0 
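[Editor's note] The version suite above derives the version string straight from include/spdk/version.h. A minimal sketch of that parsing, following the grep/cut/tr pipeline shown in the trace (app/version.sh wraps each field in its get_header_version helper):

    hdr=/var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h
    major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+'  "$hdr" | cut -f2 | tr -d '"')   # 24
    minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+'  "$hdr" | cut -f2 | tr -d '"')   # 9
    patch=$(grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+'  "$hdr" | cut -f2 | tr -d '"')   # 0
    suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')  # -pre
    # version.sh folds these into 24.9 (patch omitted when 0) and maps -pre to rc0, giving 24.9rc0,
    # which is then compared against: python3 -c 'import spdk; print(spdk.__version__)'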
00:08:13.604 10:15:50 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:08:13.604 00:08:13.604 real 0m0.176s 00:08:13.604 user 0m0.086s 00:08:13.604 sys 0m0.129s 00:08:13.604 10:15:50 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:13.604 10:15:50 version -- common/autotest_common.sh@10 -- # set +x 00:08:13.604 ************************************ 00:08:13.604 END TEST version 00:08:13.604 ************************************ 00:08:13.604 10:15:50 -- common/autotest_common.sh@1142 -- # return 0 00:08:13.604 10:15:50 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:08:13.604 10:15:50 -- spdk/autotest.sh@198 -- # uname -s 00:08:13.604 10:15:50 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:08:13.604 10:15:50 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:13.604 10:15:50 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:13.604 10:15:50 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:08:13.604 10:15:50 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:08:13.604 10:15:50 -- spdk/autotest.sh@260 -- # timing_exit lib 00:08:13.604 10:15:50 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:13.604 10:15:50 -- common/autotest_common.sh@10 -- # set +x 00:08:13.604 10:15:50 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:08:13.604 10:15:50 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:08:13.604 10:15:50 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:08:13.604 10:15:50 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:08:13.604 10:15:50 -- spdk/autotest.sh@283 -- # '[' rdma = rdma ']' 00:08:13.604 10:15:50 -- spdk/autotest.sh@284 -- # run_test nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:08:13.604 10:15:50 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:13.604 10:15:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:13.604 10:15:50 -- common/autotest_common.sh@10 -- # set +x 00:08:13.604 ************************************ 00:08:13.604 START TEST nvmf_rdma 00:08:13.604 ************************************ 00:08:13.604 10:15:50 nvmf_rdma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:08:13.865 * Looking for test storage... 00:08:13.865 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:08:13.865 10:15:50 nvmf_rdma -- nvmf/nvmf.sh@10 -- # uname -s 00:08:13.865 10:15:50 nvmf_rdma -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:13.865 10:15:50 nvmf_rdma -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:13.865 10:15:50 nvmf_rdma -- nvmf/common.sh@7 -- # uname -s 00:08:13.865 10:15:50 nvmf_rdma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:13.865 10:15:50 nvmf_rdma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:13.865 10:15:50 nvmf_rdma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:13.865 10:15:50 nvmf_rdma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:13.865 10:15:50 nvmf_rdma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:13.865 10:15:50 nvmf_rdma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:13.865 10:15:50 nvmf_rdma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:13.865 10:15:50 nvmf_rdma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:13.865 10:15:50 nvmf_rdma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:13.865 10:15:50 nvmf_rdma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:13.865 10:15:50 nvmf_rdma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:13.865 10:15:50 nvmf_rdma -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:13.865 10:15:50 nvmf_rdma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:13.865 10:15:50 nvmf_rdma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:13.865 10:15:50 nvmf_rdma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:13.865 10:15:50 nvmf_rdma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:13.866 10:15:50 nvmf_rdma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:13.866 10:15:50 nvmf_rdma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:13.866 10:15:50 nvmf_rdma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:13.866 10:15:50 nvmf_rdma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:13.866 10:15:50 nvmf_rdma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.866 10:15:50 nvmf_rdma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.866 10:15:50 nvmf_rdma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.866 10:15:50 nvmf_rdma -- paths/export.sh@5 -- # export PATH 00:08:13.866 10:15:50 nvmf_rdma -- paths/export.sh@6 -- # 
echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.866 10:15:50 nvmf_rdma -- nvmf/common.sh@47 -- # : 0 00:08:13.866 10:15:50 nvmf_rdma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:13.866 10:15:50 nvmf_rdma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:13.866 10:15:50 nvmf_rdma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:13.866 10:15:50 nvmf_rdma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:13.866 10:15:50 nvmf_rdma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:13.866 10:15:50 nvmf_rdma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:13.866 10:15:50 nvmf_rdma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:13.866 10:15:50 nvmf_rdma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:13.866 10:15:50 nvmf_rdma -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:13.866 10:15:50 nvmf_rdma -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:08:13.866 10:15:50 nvmf_rdma -- nvmf/nvmf.sh@20 -- # timing_enter target 00:08:13.866 10:15:50 nvmf_rdma -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:13.866 10:15:50 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:08:13.866 10:15:50 nvmf_rdma -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:08:13.866 10:15:50 nvmf_rdma -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:08:13.866 10:15:50 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:13.866 10:15:50 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:13.866 10:15:50 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:08:13.866 ************************************ 00:08:13.866 START TEST nvmf_example 00:08:13.866 ************************************ 00:08:13.866 10:15:50 nvmf_rdma.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:08:13.866 * Looking for test storage... 
00:08:13.866 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:13.866 10:15:51 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:13.866 10:15:51 nvmf_rdma.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:08:13.866 10:15:51 nvmf_rdma.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:13.866 10:15:51 nvmf_rdma.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:13.866 10:15:51 nvmf_rdma.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:13.866 10:15:51 nvmf_rdma.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:13.866 10:15:51 nvmf_rdma.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:13.866 10:15:51 nvmf_rdma.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:13.866 10:15:51 nvmf_rdma.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:13.866 10:15:51 nvmf_rdma.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:13.866 10:15:51 nvmf_rdma.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:13.866 10:15:51 nvmf_rdma.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:13.866 10:15:51 nvmf_rdma.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:13.866 10:15:51 nvmf_rdma.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:13.866 10:15:51 nvmf_rdma.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:13.866 10:15:51 nvmf_rdma.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:13.866 10:15:51 nvmf_rdma.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:13.866 10:15:51 nvmf_rdma.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:13.866 10:15:51 nvmf_rdma.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:13.866 10:15:51 nvmf_rdma.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:13.866 10:15:51 nvmf_rdma.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:13.866 10:15:51 nvmf_rdma.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:13.866 10:15:51 nvmf_rdma.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.866 10:15:51 nvmf_rdma.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.866 10:15:51 nvmf_rdma.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.866 10:15:51 nvmf_rdma.nvmf_example -- paths/export.sh@5 -- # export PATH 00:08:13.866 10:15:51 nvmf_rdma.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.866 10:15:51 nvmf_rdma.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:08:13.866 10:15:51 nvmf_rdma.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:13.866 10:15:51 nvmf_rdma.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:13.866 10:15:51 nvmf_rdma.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:13.866 10:15:51 nvmf_rdma.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:13.866 10:15:51 nvmf_rdma.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:13.866 10:15:51 nvmf_rdma.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:13.866 10:15:51 nvmf_rdma.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:13.866 10:15:51 nvmf_rdma.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:13.866 10:15:51 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:08:13.866 10:15:51 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:08:13.866 10:15:51 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:08:13.866 10:15:51 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:08:13.866 10:15:51 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:08:13.866 10:15:51 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:08:13.866 10:15:51 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:08:13.866 10:15:51 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:08:13.866 10:15:51 
nvmf_rdma.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:13.866 10:15:51 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:13.866 10:15:51 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:08:13.866 10:15:51 nvmf_rdma.nvmf_example -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:08:13.866 10:15:51 nvmf_rdma.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:13.866 10:15:51 nvmf_rdma.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:13.866 10:15:51 nvmf_rdma.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:13.866 10:15:51 nvmf_rdma.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:13.866 10:15:51 nvmf_rdma.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:13.866 10:15:51 nvmf_rdma.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:13.866 10:15:51 nvmf_rdma.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:13.866 10:15:51 nvmf_rdma.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:13.866 10:15:51 nvmf_rdma.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:13.866 10:15:51 nvmf_rdma.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:08:13.866 10:15:51 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:22.007 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:22.007 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:08:22.007 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:22.007 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:22.007 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:22.007 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:22.007 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:22.007 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:08:22.007 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:22.007 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:08:22.007 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:08:22.007 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:08:22.007 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:08:22.007 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:08:22.007 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:08:22.007 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:22.007 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:22.007 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:22.007 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:22.007 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:22.007 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:22.007 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
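[Editor's note] For context, the nvmf/common.sh preamble traced above pins down the test environment before any NIC is touched. Roughly, with values taken from the trace (not exhaustive):

    # Key defaults established by test/nvmf/common.sh in the trace above
    NVMF_PORT=4420                        # primary listener port (4421/4422 are the second/third ports)
    NVMF_IP_PREFIX=192.168.100            # target IPs are assigned from this /24, starting at .8
    NVMF_SERIAL=SPDKISFASTANDAWESOME
    NVME_HOSTNQN=$(nvme gen-hostnqn)      # host NQN/hostid generated per run
    NET_TYPE=phy                          # physical NICs, so real mlx5 hardware is required
    # gather_supported_nvmf_pci_devs then matches PCI vendor/device IDs (0x15b3 Mellanox, 0x8086 Intel)
    # against the bus; in the device scan that follows it finds two mlx5 ports (0x15b3:0x1015)
    # at 0000:98:00.0 and 0000:98:00.1.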
00:08:22.007 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:22.007 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:22.007 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:22.007 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:22.007 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:22.007 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:22.007 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:22.007 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:22.007 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:22.007 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:22.007 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:22.007 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:22.007 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:08:22.007 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:08:22.007 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:22.007 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:22.007 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:22.007 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:22.007 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:22.007 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:22.007 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:22.007 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:08:22.007 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:08:22.007 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:22.007 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:22.007 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:22.007 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:22.007 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:22.007 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:22.007 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:22.007 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:22.007 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:22.007 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:22.007 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:22.007 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:22.007 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:22.007 10:15:58 
nvmf_rdma.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:08:22.007 Found net devices under 0000:98:00.0: mlx_0_0 00:08:22.007 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:22.007 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:22.007 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:22.007 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:22.007 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:22.007 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:22.007 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:08:22.007 Found net devices under 0000:98:00.1: mlx_0_1 00:08:22.007 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:22.007 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:22.007 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:08:22.007 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:22.008 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:08:22.008 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:08:22.008 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@420 -- # rdma_device_init 00:08:22.008 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:08:22.008 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@58 -- # uname 00:08:22.008 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:22.008 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:22.008 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:22.008 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:22.008 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:22.008 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:22.008 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:22.008 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:22.008 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@502 -- # allocate_nic_ips 00:08:22.008 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:22.008 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:22.008 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:22.008 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:22.008 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:22.008 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:22.008 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:22.008 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:22.008 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:22.008 10:15:58 nvmf_rdma.nvmf_example -- 
nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:22.008 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:22.008 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:08:22.008 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:22.008 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:22.008 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:22.008 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:22.008 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:22.008 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:22.008 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:08:22.008 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:22.008 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:22.008 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:22.008 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:22.008 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:22.008 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:22.008 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:22.008 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:22.008 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:22.008 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:22.008 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:08:22.008 altname enp152s0f0np0 00:08:22.008 altname ens817f0np0 00:08:22.008 inet 192.168.100.8/24 scope global mlx_0_0 00:08:22.008 valid_lft forever preferred_lft forever 00:08:22.008 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:22.008 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:22.008 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:22.008 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:22.008 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:22.008 10:15:58 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:22.008 10:15:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:22.008 10:15:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:22.008 10:15:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:22.008 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:22.008 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:08:22.008 altname enp152s0f1np1 00:08:22.008 altname ens817f1np1 00:08:22.008 inet 192.168.100.9/24 scope global mlx_0_1 00:08:22.008 valid_lft forever preferred_lft forever 00:08:22.008 10:15:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:08:22.008 10:15:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:22.008 10:15:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:22.008 10:15:59 
nvmf_rdma.nvmf_example -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:08:22.008 10:15:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:08:22.008 10:15:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:22.008 10:15:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:22.008 10:15:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:22.008 10:15:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:22.008 10:15:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:22.008 10:15:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:22.008 10:15:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:22.008 10:15:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:22.008 10:15:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:22.008 10:15:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:22.008 10:15:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:08:22.008 10:15:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:22.008 10:15:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:22.008 10:15:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:22.008 10:15:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:22.008 10:15:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:22.008 10:15:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:22.008 10:15:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:08:22.008 10:15:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:22.008 10:15:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:22.008 10:15:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:22.008 10:15:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:22.008 10:15:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:22.008 10:15:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:22.008 10:15:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:22.008 10:15:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:22.008 10:15:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:22.008 10:15:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:22.008 10:15:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:22.008 10:15:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:22.008 10:15:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:08:22.008 192.168.100.9' 00:08:22.008 10:15:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:08:22.008 192.168.100.9' 00:08:22.008 10:15:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@457 -- # head -n 1 00:08:22.008 10:15:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:22.008 
10:15:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:08:22.008 192.168.100.9' 00:08:22.008 10:15:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@458 -- # tail -n +2 00:08:22.008 10:15:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@458 -- # head -n 1 00:08:22.008 10:15:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:22.008 10:15:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:08:22.008 10:15:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:22.008 10:15:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:08:22.008 10:15:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:08:22.008 10:15:59 nvmf_rdma.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:08:22.008 10:15:59 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:08:22.008 10:15:59 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:08:22.008 10:15:59 nvmf_rdma.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:22.008 10:15:59 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:22.008 10:15:59 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@29 -- # '[' rdma == tcp ']' 00:08:22.008 10:15:59 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2754878 00:08:22.008 10:15:59 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:22.008 10:15:59 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:08:22.008 10:15:59 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2754878 00:08:22.008 10:15:59 nvmf_rdma.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 2754878 ']' 00:08:22.008 10:15:59 nvmf_rdma.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.008 10:15:59 nvmf_rdma.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:22.008 10:15:59 nvmf_rdma.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:22.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
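[Editor's note] The IP-resolution block traced above boils down to reading the IPv4 address off each RDMA netdev. A condensed version of the pipeline shown in the trace:

    # How the trace resolves the RDMA test IPs (mlx_0_0 and mlx_0_1)
    for ifc in mlx_0_0 mlx_0_1; do
        ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
    done
    # -> 192.168.100.8 (NVMF_FIRST_TARGET_IP) and 192.168.100.9 (NVMF_SECOND_TARGET_IP)
    # Kernel modules loaded earlier in the trace as prerequisites:
    #   ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm nvme-rdma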
00:08:22.008 10:15:59 nvmf_rdma.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:22.008 10:15:59 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:22.008 EAL: No free 2048 kB hugepages reported on node 1 00:08:22.949 10:15:59 nvmf_rdma.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:22.949 10:15:59 nvmf_rdma.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:08:22.949 10:15:59 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:08:22.949 10:15:59 nvmf_rdma.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:22.949 10:15:59 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:22.949 10:15:59 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:22.949 10:15:59 nvmf_rdma.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:22.949 10:15:59 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:23.210 10:16:00 nvmf_rdma.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.210 10:16:00 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:08:23.210 10:16:00 nvmf_rdma.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.210 10:16:00 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:23.210 10:16:00 nvmf_rdma.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.210 10:16:00 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:08:23.210 10:16:00 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:23.210 10:16:00 nvmf_rdma.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.210 10:16:00 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:23.210 10:16:00 nvmf_rdma.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.210 10:16:00 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:08:23.210 10:16:00 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:23.210 10:16:00 nvmf_rdma.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.210 10:16:00 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:23.210 10:16:00 nvmf_rdma.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.210 10:16:00 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:23.210 10:16:00 nvmf_rdma.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:23.210 10:16:00 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:23.210 10:16:00 nvmf_rdma.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:23.210 10:16:00 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:08:23.210 10:16:00 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 
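[Editor's note] Before the perf output that follows, the example target is brought up with a handful of RPCs. This is a condensed recap of the commands visible in the trace; binary and script paths are shortened, and "rpc.py" stands in for the harness's rpc_cmd wrapper.

    # Example nvmf target bring-up exercised by nvmf_example.sh (as traced above)
    ./build/examples/nvmf -i 0 -g 10000 -m 0xF &     # example target on cores 0-3
    rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    rpc.py bdev_malloc_create 64 512                 # 64 MiB malloc bdev, 512 B blocks -> Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    # Load generator producing the IOPS/latency summary below:
    ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'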
00:08:23.210 EAL: No free 2048 kB hugepages reported on node 1 00:08:35.445 Initializing NVMe Controllers 00:08:35.445 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:08:35.445 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:35.445 Initialization complete. Launching workers. 00:08:35.445 ======================================================== 00:08:35.445 Latency(us) 00:08:35.445 Device Information : IOPS MiB/s Average min max 00:08:35.445 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 25876.04 101.08 2472.99 672.86 19996.36 00:08:35.445 ======================================================== 00:08:35.445 Total : 25876.04 101.08 2472.99 672.86 19996.36 00:08:35.445 00:08:35.445 10:16:11 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:08:35.445 10:16:11 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:08:35.445 10:16:11 nvmf_rdma.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:35.445 10:16:11 nvmf_rdma.nvmf_example -- nvmf/common.sh@117 -- # sync 00:08:35.445 10:16:11 nvmf_rdma.nvmf_example -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:08:35.445 10:16:11 nvmf_rdma.nvmf_example -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:08:35.445 10:16:11 nvmf_rdma.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:08:35.445 10:16:11 nvmf_rdma.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:35.445 10:16:11 nvmf_rdma.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:08:35.445 rmmod nvme_rdma 00:08:35.445 rmmod nvme_fabrics 00:08:35.445 10:16:11 nvmf_rdma.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:35.445 10:16:11 nvmf_rdma.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:08:35.445 10:16:11 nvmf_rdma.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:08:35.445 10:16:11 nvmf_rdma.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 2754878 ']' 00:08:35.445 10:16:11 nvmf_rdma.nvmf_example -- nvmf/common.sh@490 -- # killprocess 2754878 00:08:35.445 10:16:11 nvmf_rdma.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 2754878 ']' 00:08:35.445 10:16:11 nvmf_rdma.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 2754878 00:08:35.445 10:16:11 nvmf_rdma.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:08:35.445 10:16:11 nvmf_rdma.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:35.445 10:16:11 nvmf_rdma.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2754878 00:08:35.445 10:16:11 nvmf_rdma.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:08:35.445 10:16:11 nvmf_rdma.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:08:35.445 10:16:11 nvmf_rdma.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2754878' 00:08:35.445 killing process with pid 2754878 00:08:35.445 10:16:11 nvmf_rdma.nvmf_example -- common/autotest_common.sh@967 -- # kill 2754878 00:08:35.445 10:16:11 nvmf_rdma.nvmf_example -- common/autotest_common.sh@972 -- # wait 2754878 00:08:35.445 nvmf threads initialize successfully 00:08:35.445 bdev subsystem init successfully 00:08:35.445 created a nvmf target service 00:08:35.445 create targets's poll groups done 00:08:35.445 all subsystems of target started 00:08:35.445 nvmf target is running 00:08:35.445 all subsystems of target stopped 00:08:35.445 destroy targets's poll groups 
done 00:08:35.445 destroyed the nvmf target service 00:08:35.445 bdev subsystem finish successfully 00:08:35.445 nvmf threads destroy successfully 00:08:35.445 10:16:11 nvmf_rdma.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:35.445 10:16:11 nvmf_rdma.nvmf_example -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:08:35.445 10:16:11 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:08:35.445 10:16:11 nvmf_rdma.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:35.445 10:16:11 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:35.445 00:08:35.445 real 0m20.916s 00:08:35.445 user 0m52.444s 00:08:35.445 sys 0m6.361s 00:08:35.445 10:16:11 nvmf_rdma.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:35.445 10:16:11 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:35.445 ************************************ 00:08:35.445 END TEST nvmf_example 00:08:35.445 ************************************ 00:08:35.445 10:16:11 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:08:35.445 10:16:11 nvmf_rdma -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:08:35.445 10:16:11 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:35.445 10:16:11 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:35.445 10:16:11 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:08:35.445 ************************************ 00:08:35.445 START TEST nvmf_filesystem 00:08:35.445 ************************************ 00:08:35.445 10:16:11 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:08:35.445 * Looking for test storage... 
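[Editor's note] The nvmf_example teardown traced just above (nvmftestfini plus killprocess) amounts to the following, simplified from the trace; $nvmfpid is the example target started earlier (2754878 in this run):

    # Teardown performed at the end of nvmf_example_test (simplified)
    sync
    modprobe -v -r nvme-rdma        # trace shows this unloading nvme_rdma and nvme_fabrics
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"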
00:08:35.445 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:35.445 10:16:12 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:08:35.445 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:35.445 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:08:35.445 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:35.445 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:35.445 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:08:35.445 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:08:35.445 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:35.445 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:08:35.445 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:35.445 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:35.445 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:35.445 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:35.445 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:35.445 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:35.445 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:35.445 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:35.445 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:35.445 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:35.445 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:35.445 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:35.445 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:35.445 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:35.445 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:35.445 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:35.445 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:35.445 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:35.445 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:08:35.445 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:35.445 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:35.445 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:35.445 10:16:12 
nvmf_rdma.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:35.445 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:35.445 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:35.445 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:35.445 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:35.445 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:35.445 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:35.445 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:35.445 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:35.445 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:35.445 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:35.445 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:35.445 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:35.445 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:08:35.445 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:35.445 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:35.445 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:35.445 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:35.445 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:08:35.445 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:35.445 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:35.445 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:35.445 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:35.445 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:08:35.445 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:08:35.445 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:08:35.445 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:35.445 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:08:35.446 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:08:35.446 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:08:35.446 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:08:35.446 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:08:35.446 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:08:35.446 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:08:35.446 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@57 -- 
# CONFIG_HAVE_LIBBSD=n 00:08:35.446 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:08:35.446 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:08:35.446 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:08:35.446 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:08:35.446 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:08:35.446 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:08:35.446 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:08:35.446 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:08:35.446 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:08:35.446 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:08:35.446 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:08:35.446 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:35.446 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:08:35.446 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:08:35.446 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:08:35.446 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:08:35.446 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:08:35.446 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:08:35.446 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:08:35.446 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:08:35.446 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:08:35.446 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:08:35.446 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:08:35.446 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:35.446 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:08:35.446 10:16:12 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:08:35.446 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:08:35.446 10:16:12 nvmf_rdma.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:08:35.446 10:16:12 nvmf_rdma.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:08:35.446 10:16:12 nvmf_rdma.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:08:35.446 10:16:12 nvmf_rdma.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:08:35.446 10:16:12 nvmf_rdma.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:08:35.446 
10:16:12 nvmf_rdma.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:08:35.446 10:16:12 nvmf_rdma.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:08:35.446 10:16:12 nvmf_rdma.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:35.446 10:16:12 nvmf_rdma.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:35.446 10:16:12 nvmf_rdma.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:35.446 10:16:12 nvmf_rdma.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:35.446 10:16:12 nvmf_rdma.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:35.446 10:16:12 nvmf_rdma.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:35.446 10:16:12 nvmf_rdma.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:08:35.446 10:16:12 nvmf_rdma.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:35.446 #define SPDK_CONFIG_H 00:08:35.446 #define SPDK_CONFIG_APPS 1 00:08:35.446 #define SPDK_CONFIG_ARCH native 00:08:35.446 #undef SPDK_CONFIG_ASAN 00:08:35.446 #undef SPDK_CONFIG_AVAHI 00:08:35.446 #undef SPDK_CONFIG_CET 00:08:35.446 #define SPDK_CONFIG_COVERAGE 1 00:08:35.446 #define SPDK_CONFIG_CROSS_PREFIX 00:08:35.446 #undef SPDK_CONFIG_CRYPTO 00:08:35.446 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:35.446 #undef SPDK_CONFIG_CUSTOMOCF 00:08:35.446 #undef SPDK_CONFIG_DAOS 00:08:35.446 #define SPDK_CONFIG_DAOS_DIR 00:08:35.446 #define SPDK_CONFIG_DEBUG 1 00:08:35.446 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:35.446 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:08:35.446 #define SPDK_CONFIG_DPDK_INC_DIR 00:08:35.446 #define SPDK_CONFIG_DPDK_LIB_DIR 00:08:35.446 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:35.446 #undef SPDK_CONFIG_DPDK_UADK 00:08:35.446 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:08:35.446 #define SPDK_CONFIG_EXAMPLES 1 00:08:35.446 #undef SPDK_CONFIG_FC 00:08:35.446 #define SPDK_CONFIG_FC_PATH 00:08:35.446 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:35.446 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:35.446 #undef SPDK_CONFIG_FUSE 00:08:35.446 #undef SPDK_CONFIG_FUZZER 00:08:35.446 #define SPDK_CONFIG_FUZZER_LIB 00:08:35.446 #undef SPDK_CONFIG_GOLANG 00:08:35.446 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:35.446 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:08:35.446 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:35.446 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:08:35.446 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:35.446 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:35.446 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:35.446 #define SPDK_CONFIG_IDXD 1 00:08:35.446 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:35.446 #undef SPDK_CONFIG_IPSEC_MB 00:08:35.446 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:35.446 #define SPDK_CONFIG_ISAL 1 00:08:35.446 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:35.446 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:35.446 #define SPDK_CONFIG_LIBDIR 00:08:35.446 #undef SPDK_CONFIG_LTO 00:08:35.446 #define SPDK_CONFIG_MAX_LCORES 128 00:08:35.446 #define SPDK_CONFIG_NVME_CUSE 1 00:08:35.446 #undef SPDK_CONFIG_OCF 00:08:35.446 #define 
SPDK_CONFIG_OCF_PATH 00:08:35.446 #define SPDK_CONFIG_OPENSSL_PATH 00:08:35.446 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:35.446 #define SPDK_CONFIG_PGO_DIR 00:08:35.446 #undef SPDK_CONFIG_PGO_USE 00:08:35.446 #define SPDK_CONFIG_PREFIX /usr/local 00:08:35.446 #undef SPDK_CONFIG_RAID5F 00:08:35.446 #undef SPDK_CONFIG_RBD 00:08:35.446 #define SPDK_CONFIG_RDMA 1 00:08:35.446 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:35.446 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:35.446 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:35.446 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:35.446 #define SPDK_CONFIG_SHARED 1 00:08:35.446 #undef SPDK_CONFIG_SMA 00:08:35.446 #define SPDK_CONFIG_TESTS 1 00:08:35.446 #undef SPDK_CONFIG_TSAN 00:08:35.446 #define SPDK_CONFIG_UBLK 1 00:08:35.446 #define SPDK_CONFIG_UBSAN 1 00:08:35.446 #undef SPDK_CONFIG_UNIT_TESTS 00:08:35.446 #undef SPDK_CONFIG_URING 00:08:35.446 #define SPDK_CONFIG_URING_PATH 00:08:35.446 #undef SPDK_CONFIG_URING_ZNS 00:08:35.446 #undef SPDK_CONFIG_USDT 00:08:35.446 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:35.446 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:35.446 #undef SPDK_CONFIG_VFIO_USER 00:08:35.446 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:35.446 #define SPDK_CONFIG_VHOST 1 00:08:35.446 #define SPDK_CONFIG_VIRTIO 1 00:08:35.446 #undef SPDK_CONFIG_VTUNE 00:08:35.446 #define SPDK_CONFIG_VTUNE_DIR 00:08:35.446 #define SPDK_CONFIG_WERROR 1 00:08:35.446 #define SPDK_CONFIG_WPDK_DIR 00:08:35.446 #undef SPDK_CONFIG_XNVME 00:08:35.446 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:35.446 10:16:12 nvmf_rdma.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:35.446 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:35.446 10:16:12 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:35.446 10:16:12 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:35.446 10:16:12 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:35.446 10:16:12 nvmf_rdma.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.446 10:16:12 nvmf_rdma.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.446 10:16:12 nvmf_rdma.nvmf_filesystem -- paths/export.sh@4 -- # 
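The backslash-heavy pattern in the applications.sh trace is just bash glob quoting: the helper reads the include/spdk/config.h dumped above and checks whether it declares a debug build. Stripped of the escaping, the test is simply the following sketch (config_h is a local name used here for readability):
# true when the tree was configured as a debug build
config_h=/var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h
if [[ $(< "$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
    echo "debug build detected"
fi
Judging from the dumped flags (SPDK_CONFIG_DEBUG, SPDK_CONFIG_UBSAN, SPDK_CONFIG_RDMA with the verbs provider, SPDK_CONFIG_SHARED), the tree was most likely configured with something along the lines of ./configure --enable-debug --enable-ubsan --with-rdma --with-shared, though the exact invocation is not part of this log.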
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.446 10:16:12 nvmf_rdma.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:08:35.446 10:16:12 nvmf_rdma.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.446 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:08:35.446 10:16:12 nvmf_rdma.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:08:35.446 10:16:12 nvmf_rdma.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- pm/common@68 -- # uname -s 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load 
collect-vmstat) 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power ]] 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@102 -- # : rdma 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:08:35.447 10:16:12 
nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@154 -- # : mlx5 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- 
common/autotest_common.sh@158 -- # : 0 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:08:35.447 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- 
common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- 
common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j144 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=rdma 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 2757624 ]] 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 2757624 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.4v7R1D 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.4v7R1D/tests/target /tmp/spdk.4v7R1D 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # 
avails["$mount"]=67108864 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=956157952 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4328271872 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:35.448 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=122809004032 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=129370980352 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=6561976320 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64680779776 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685490176 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=25864245248 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=25874198528 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9953280 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=efivarfs 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=efivarfs 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=179200 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=507904 00:08:35.449 
10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=324608 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64683921408 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685490176 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=1568768 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12937093120 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12937097216 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:08:35.449 * Looking for test storage... 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=122809004032 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=8776568832 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@387 -- # 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:35.449 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@19 -- # 
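The set_test_storage walk above is the harness making sure the test directory sits on a filesystem with room to spare: it prepares a fallback under /tmp, asks df which mount backs each candidate, takes the available bytes as target_space, and rejects a candidate if the requested reservation would push that filesystem past 95% used. A condensed sketch of the decision with this run's numbers, assuming the usual df -T field order (source, type, size, used, avail, use%, mount):
requested_size=2214592512    # bytes the test wants to set aside
target_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target

# last line of 'df -T -B1' for the directory: source type size used avail use% mount
read -r _ _ size used avail _ mount <<< "$(df -T -B1 "$target_dir" | tail -n 1)"

target_space=$avail
if (( target_space == 0 || target_space < requested_size )); then
    echo "skip $mount: not enough free space" >&2
elif (( (used + requested_size) * 100 / size > 95 )); then
    echo "skip $mount: would exceed 95% usage" >&2
else
    # here: / (overlay) with 122809004032 bytes available, well under the limit
    export SPDK_TEST_STORAGE=$target_dir
fi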
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:35.449 10:16:12 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:35.450 10:16:12 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:35.450 10:16:12 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:35.450 10:16:12 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:35.450 10:16:12 nvmf_rdma.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.450 10:16:12 nvmf_rdma.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.450 10:16:12 nvmf_rdma.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.450 10:16:12 nvmf_rdma.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:08:35.450 10:16:12 nvmf_rdma.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.450 10:16:12 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:08:35.450 10:16:12 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:35.450 10:16:12 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:35.450 10:16:12 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:35.450 10:16:12 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:35.450 10:16:12 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:35.450 10:16:12 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:35.450 10:16:12 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:35.450 10:16:12 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:35.450 10:16:12 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:08:35.450 10:16:12 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:08:35.450 10:16:12 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:08:35.450 10:16:12 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:08:35.450 10:16:12 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:35.450 10:16:12 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:35.450 10:16:12 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:35.450 10:16:12 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:35.450 10:16:12 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:35.450 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:35.450 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:35.450 10:16:12 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:35.450 10:16:12 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:35.450 10:16:12 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:08:35.450 10:16:12 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:43.595 10:16:19 
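A condensed sketch of the host-identity setup that nvmf/common.sh performs in the trace above; the exact parameter expansion used to derive NVME_HOSTID is an assumption (the trace only shows the resulting value):
  NVME_HOSTNQN=$(nvme gen-hostnqn)                    # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}                 # assumed derivation: keep only the UUID part
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
  NVME_CONNECT='nvme connect'                         # later extended to 'nvme connect -i 15' for the mlx5 ports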
nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:08:43.595 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:43.595 10:16:19 
nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:08:43.595 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:08:43.595 Found net devices under 0000:98:00.0: mlx_0_0 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:08:43.595 Found net devices under 0000:98:00.1: mlx_0_1 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@420 -- # rdma_device_init 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@58 -- # uname 00:08:43.595 10:16:19 
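The device-discovery loop traced above maps each matching Mellanox PCI function to its kernel netdev through sysfs; reduced to its essentials (expressions taken from the trace):
  for pci in "${pci_devs[@]}"; do                      # 0000:98:00.0 and 0000:98:00.1 in this run
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # sysfs exposes the bound netdev for each port
    pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface names: mlx_0_0, mlx_0_1
    net_devs+=("${pci_net_devs[@]}")
  done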
nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@502 -- # allocate_nic_ips 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:43.595 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:43.596 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:43.596 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:43.596 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:43.596 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:08:43.596 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:43.596 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:43.596 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:43.596 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:43.596 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:43.596 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:43.596 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:43.596 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:43.596 
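rdma_device_init, as traced above, first loads the kernel RDMA stack module by module before handing out addresses; equivalently:
  modprobe ib_cm;  modprobe ib_core;  modprobe ib_umad;  modprobe ib_uverbs
  modprobe iw_cm;  modprobe rdma_cm;  modprobe rdma_ucm
  # allocate_nic_ips then walks get_rdma_if_list and assigns 192.168.100.x/24 per RDMA-capable interface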
10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:43.596 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:43.596 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:08:43.596 altname enp152s0f0np0 00:08:43.596 altname ens817f0np0 00:08:43.596 inet 192.168.100.8/24 scope global mlx_0_0 00:08:43.596 valid_lft forever preferred_lft forever 00:08:43.596 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:43.596 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:43.596 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:43.596 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:43.596 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:43.596 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:43.596 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:43.596 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:43.596 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:43.596 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:43.596 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:08:43.596 altname enp152s0f1np1 00:08:43.596 altname ens817f1np1 00:08:43.596 inet 192.168.100.9/24 scope global mlx_0_1 00:08:43.596 valid_lft forever preferred_lft forever 00:08:43.596 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:08:43.596 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:43.596 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:43.596 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:08:43.596 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:08:43.596 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:43.596 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:43.596 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:43.596 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:43.596 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:43.596 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:43.596 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:43.596 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:43.596 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:43.596 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:43.596 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:08:43.596 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:43.596 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:43.596 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:43.596 10:16:19 
nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:43.596 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:43.596 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:43.596 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:08:43.596 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:43.596 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:43.596 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:43.596 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:43.596 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:43.596 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:43.596 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:43.596 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:43.596 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:43.596 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:43.596 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:43.596 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:43.596 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:08:43.596 192.168.100.9' 00:08:43.596 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:08:43.596 192.168.100.9' 00:08:43.596 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@457 -- # head -n 1 00:08:43.596 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:43.596 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:08:43.596 192.168.100.9' 00:08:43.596 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@458 -- # tail -n +2 00:08:43.596 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@458 -- # head -n 1 00:08:43.596 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:43.596 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:08:43.596 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:43.596 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:08:43.596 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:08:43.596 10:16:19 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:08:43.596 10:16:19 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:08:43.596 10:16:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:43.596 10:16:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:43.596 10:16:19 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:43.596 ************************************ 00:08:43.596 START TEST nvmf_filesystem_no_in_capsule 00:08:43.596 ************************************ 00:08:43.596 10:16:20 
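The target-IP derivation above boils down to reading the first IPv4 address off each RDMA interface; a condensed version of the helper as traced:
  get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1   # "192.168.100.8/24" -> "192.168.100.8"
  }
  NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)     # 192.168.100.8 in this run
  NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)    # 192.168.100.9 in this run
  NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
  modprobe nvme-rdma                                 # host-side initiator module for the later connect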
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:08:43.596 10:16:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:08:43.596 10:16:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:43.596 10:16:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:43.596 10:16:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:43.596 10:16:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:43.596 10:16:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2761762 00:08:43.596 10:16:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2761762 00:08:43.596 10:16:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 2761762 ']' 00:08:43.597 10:16:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:43.597 10:16:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:43.597 10:16:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:43.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:43.597 10:16:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:43.597 10:16:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:43.597 10:16:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:43.597 [2024-07-15 10:16:20.092968] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:43.597 [2024-07-15 10:16:20.093034] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:43.597 EAL: No free 2048 kB hugepages reported on node 1 00:08:43.597 [2024-07-15 10:16:20.166107] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:43.597 [2024-07-15 10:16:20.245606] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:43.597 [2024-07-15 10:16:20.245651] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:43.597 [2024-07-15 10:16:20.245658] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:43.597 [2024-07-15 10:16:20.245665] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:43.597 [2024-07-15 10:16:20.245671] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:43.597 [2024-07-15 10:16:20.245809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:43.597 [2024-07-15 10:16:20.245934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:43.597 [2024-07-15 10:16:20.246091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.597 [2024-07-15 10:16:20.246092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:43.857 10:16:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:43.857 10:16:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:08:43.857 10:16:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:43.857 10:16:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:43.857 10:16:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:43.858 10:16:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:43.858 10:16:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:43.858 10:16:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:08:43.858 10:16:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.858 10:16:20 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:43.858 [2024-07-15 10:16:20.922846] rdma.c:2731:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:08:43.858 [2024-07-15 10:16:20.954889] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1a88200/0x1a8c6f0) succeed. 00:08:43.858 [2024-07-15 10:16:20.969733] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1a89840/0x1acdd80) succeed. 
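Target bring-up for this subtest reduces to two steps: start nvmf_tgt with the core mask used above, then create the RDMA transport over JSON-RPC. A hedged sketch (rpc_cmd in the trace is a thin wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock; the wait loop is simplified here):
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!                                         # the harness polls the RPC socket (waitforlisten) before continuing
  # in-capsule data size 0 for the no_in_capsule variant
  scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0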
00:08:44.122 10:16:21 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.122 10:16:21 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:44.122 10:16:21 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.122 10:16:21 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:44.122 Malloc1 00:08:44.122 10:16:21 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.122 10:16:21 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:44.122 10:16:21 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.122 10:16:21 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:44.122 10:16:21 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.122 10:16:21 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:44.122 10:16:21 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.122 10:16:21 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:44.122 10:16:21 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.122 10:16:21 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:44.122 10:16:21 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.122 10:16:21 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:44.122 [2024-07-15 10:16:21.206525] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:44.122 10:16:21 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.122 10:16:21 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:44.122 10:16:21 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:08:44.122 10:16:21 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:08:44.122 10:16:21 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:08:44.122 10:16:21 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:08:44.122 10:16:21 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:44.122 10:16:21 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.122 10:16:21 
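The export path traced above is the standard four-RPC sequence: back a namespace with a malloc bdev, create the subsystem, attach the namespace, and open an RDMA listener on the first target IP:
  scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1        # 512 MB bdev with 512-byte blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420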
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:44.122 10:16:21 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.122 10:16:21 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:08:44.122 { 00:08:44.122 "name": "Malloc1", 00:08:44.122 "aliases": [ 00:08:44.122 "a3665dae-2d75-43df-b91e-bd4830fedee9" 00:08:44.122 ], 00:08:44.122 "product_name": "Malloc disk", 00:08:44.122 "block_size": 512, 00:08:44.122 "num_blocks": 1048576, 00:08:44.122 "uuid": "a3665dae-2d75-43df-b91e-bd4830fedee9", 00:08:44.122 "assigned_rate_limits": { 00:08:44.122 "rw_ios_per_sec": 0, 00:08:44.122 "rw_mbytes_per_sec": 0, 00:08:44.122 "r_mbytes_per_sec": 0, 00:08:44.123 "w_mbytes_per_sec": 0 00:08:44.123 }, 00:08:44.123 "claimed": true, 00:08:44.123 "claim_type": "exclusive_write", 00:08:44.123 "zoned": false, 00:08:44.123 "supported_io_types": { 00:08:44.123 "read": true, 00:08:44.123 "write": true, 00:08:44.123 "unmap": true, 00:08:44.123 "flush": true, 00:08:44.123 "reset": true, 00:08:44.123 "nvme_admin": false, 00:08:44.123 "nvme_io": false, 00:08:44.123 "nvme_io_md": false, 00:08:44.123 "write_zeroes": true, 00:08:44.123 "zcopy": true, 00:08:44.123 "get_zone_info": false, 00:08:44.123 "zone_management": false, 00:08:44.123 "zone_append": false, 00:08:44.123 "compare": false, 00:08:44.123 "compare_and_write": false, 00:08:44.123 "abort": true, 00:08:44.123 "seek_hole": false, 00:08:44.123 "seek_data": false, 00:08:44.123 "copy": true, 00:08:44.123 "nvme_iov_md": false 00:08:44.123 }, 00:08:44.123 "memory_domains": [ 00:08:44.123 { 00:08:44.123 "dma_device_id": "system", 00:08:44.123 "dma_device_type": 1 00:08:44.123 }, 00:08:44.123 { 00:08:44.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.123 "dma_device_type": 2 00:08:44.123 } 00:08:44.123 ], 00:08:44.123 "driver_specific": {} 00:08:44.123 } 00:08:44.123 ]' 00:08:44.123 10:16:21 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:08:44.123 10:16:21 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:08:44.123 10:16:21 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:08:44.443 10:16:21 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:08:44.443 10:16:21 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:08:44.443 10:16:21 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:08:44.443 10:16:21 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:44.443 10:16:21 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:45.835 10:16:22 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:45.835 10:16:22 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:08:45.835 10:16:22 
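get_bdev_size, exercised above, just multiplies the two fields pulled out of the bdev_get_bdevs JSON with jq:
  bdev_info=$(scripts/rpc.py bdev_get_bdevs -b Malloc1)
  bs=$(jq '.[] .block_size' <<< "$bdev_info")        # 512
  nb=$(jq '.[] .num_blocks' <<< "$bdev_info")        # 1048576
  malloc_size=$(( bs * nb ))                         # 512 * 1048576 = 536870912 bytes (512 MiB)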
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:45.835 10:16:22 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:45.835 10:16:22 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:08:47.750 10:16:24 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:47.750 10:16:24 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:47.750 10:16:24 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:47.750 10:16:24 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:47.750 10:16:24 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:47.750 10:16:24 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:08:47.750 10:16:24 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:47.750 10:16:24 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:47.750 10:16:24 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:47.750 10:16:24 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:47.750 10:16:24 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:47.750 10:16:24 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:47.750 10:16:24 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:47.750 10:16:24 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:47.750 10:16:24 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:47.750 10:16:24 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:47.750 10:16:24 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:47.750 10:16:24 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:47.750 10:16:24 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:48.693 10:16:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:48.693 10:16:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:48.693 10:16:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:48.693 10:16:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:48.693 10:16:25 
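On the host side the flow traced above is: connect over RDMA, locate the new namespace block device by its serial, then carve a single GPT partition for the filesystem subtests. Condensed:
  nvme connect -i 15 "${NVME_HOST[@]}" -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
  nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')   # nvme0n1 here
  parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe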
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:48.693 ************************************ 00:08:48.693 START TEST filesystem_ext4 00:08:48.693 ************************************ 00:08:48.693 10:16:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:48.693 10:16:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:48.693 10:16:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:48.693 10:16:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:48.693 10:16:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:08:48.693 10:16:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:48.693 10:16:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:08:48.693 10:16:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:08:48.693 10:16:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:08:48.693 10:16:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:08:48.693 10:16:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:48.693 mke2fs 1.46.5 (30-Dec-2021) 00:08:48.954 Discarding device blocks: 0/522240 done 00:08:48.954 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:48.954 Filesystem UUID: 0e54a077-dc62-4287-ac64-a24a7cba1131 00:08:48.954 Superblock backups stored on blocks: 00:08:48.954 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:48.954 00:08:48.954 Allocating group tables: 0/64 done 00:08:48.954 Writing inode tables: 0/64 done 00:08:48.954 Creating journal (8192 blocks): done 00:08:48.954 Writing superblocks and filesystem accounting information: 0/64 done 00:08:48.954 00:08:48.954 10:16:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:08:48.954 10:16:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:48.954 10:16:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:48.954 10:16:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:08:48.954 10:16:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:48.954 10:16:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:08:48.954 10:16:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:48.954 10:16:25 
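Each filesystem_* subtest runs the same create/use cycle on the new partition; for ext4, as traced here, that amounts to:
  mkfs.ext4 -F /dev/nvme0n1p1
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa && sync
  rm /mnt/device/aaa && sync
  umount /mnt/device            # the btrfs and xfs subtests repeat the cycle with mkfs.btrfs -f / mkfs.xfs -f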
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:48.954 10:16:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2761762 00:08:48.954 10:16:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:48.954 10:16:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:48.954 10:16:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:48.954 10:16:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:48.954 00:08:48.954 real 0m0.128s 00:08:48.954 user 0m0.023s 00:08:48.954 sys 0m0.046s 00:08:48.954 10:16:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:48.954 10:16:25 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:48.954 ************************************ 00:08:48.954 END TEST filesystem_ext4 00:08:48.954 ************************************ 00:08:48.954 10:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:48.954 10:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:48.954 10:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:48.954 10:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:48.954 10:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:48.954 ************************************ 00:08:48.954 START TEST filesystem_btrfs 00:08:48.955 ************************************ 00:08:48.955 10:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:48.955 10:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:48.955 10:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:48.955 10:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:48.955 10:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:08:48.955 10:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:48.955 10:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:08:48.955 10:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:08:48.955 10:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = 
ext4 ']' 00:08:48.955 10:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:08:48.955 10:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:48.955 btrfs-progs v6.6.2 00:08:48.955 See https://btrfs.readthedocs.io for more information. 00:08:48.955 00:08:48.955 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:08:48.955 NOTE: several default settings have changed in version 5.15, please make sure 00:08:48.955 this does not affect your deployments: 00:08:48.955 - DUP for metadata (-m dup) 00:08:48.955 - enabled no-holes (-O no-holes) 00:08:48.955 - enabled free-space-tree (-R free-space-tree) 00:08:48.955 00:08:48.955 Label: (null) 00:08:48.955 UUID: d2be4f3b-88bc-4362-b274-1226f88258a6 00:08:48.955 Node size: 16384 00:08:48.955 Sector size: 4096 00:08:48.955 Filesystem size: 510.00MiB 00:08:48.955 Block group profiles: 00:08:48.955 Data: single 8.00MiB 00:08:48.955 Metadata: DUP 32.00MiB 00:08:48.955 System: DUP 8.00MiB 00:08:48.955 SSD detected: yes 00:08:48.955 Zoned device: no 00:08:48.955 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:48.955 Runtime features: free-space-tree 00:08:48.955 Checksum: crc32c 00:08:48.955 Number of devices: 1 00:08:48.955 Devices: 00:08:48.955 ID SIZE PATH 00:08:48.955 1 510.00MiB /dev/nvme0n1p1 00:08:48.955 00:08:48.955 10:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:08:48.955 10:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:49.216 10:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:49.216 10:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:08:49.216 10:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:49.216 10:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:08:49.216 10:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:49.216 10:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:49.216 10:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2761762 00:08:49.216 10:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:49.216 10:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:49.216 10:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:49.216 10:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:49.216 00:08:49.216 real 0m0.133s 00:08:49.216 user 0m0.017s 00:08:49.216 sys 0m0.064s 00:08:49.216 10:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:08:49.216 10:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:49.216 ************************************ 00:08:49.216 END TEST filesystem_btrfs 00:08:49.216 ************************************ 00:08:49.216 10:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:49.216 10:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:49.216 10:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:49.216 10:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:49.216 10:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:49.216 ************************************ 00:08:49.216 START TEST filesystem_xfs 00:08:49.216 ************************************ 00:08:49.216 10:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:08:49.216 10:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:49.216 10:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:49.216 10:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:49.216 10:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:08:49.216 10:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:49.216 10:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:08:49.216 10:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:08:49.216 10:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:08:49.216 10:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:08:49.216 10:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:49.216 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:49.216 = sectsz=512 attr=2, projid32bit=1 00:08:49.216 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:49.216 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:49.216 data = bsize=4096 blocks=130560, imaxpct=25 00:08:49.216 = sunit=0 swidth=0 blks 00:08:49.216 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:49.216 log =internal log bsize=4096 blocks=16384, version=2 00:08:49.216 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:49.216 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:49.216 Discarding blocks...Done. 
00:08:49.216 10:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:08:49.216 10:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:49.216 10:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:49.216 10:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:08:49.216 10:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:49.216 10:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:08:49.217 10:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:08:49.217 10:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:49.478 10:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2761762 00:08:49.478 10:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:49.478 10:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:49.478 10:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:49.478 10:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:49.478 00:08:49.478 real 0m0.155s 00:08:49.478 user 0m0.027s 00:08:49.478 sys 0m0.048s 00:08:49.478 10:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:49.478 10:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:49.478 ************************************ 00:08:49.478 END TEST filesystem_xfs 00:08:49.478 ************************************ 00:08:49.478 10:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:49.478 10:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:49.478 10:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:49.478 10:16:26 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:50.862 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:50.862 10:16:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:50.862 10:16:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:08:50.862 10:16:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:50.862 10:16:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 
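Teardown of the no_in_capsule run, as traced around this point: drop the test partition under an exclusive lock, disconnect the host, then remove the subsystem over RPC.
  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1     # remove partition 1 while no other test holds the device
  sync
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1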
00:08:50.862 10:16:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:50.862 10:16:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:50.862 10:16:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:08:50.862 10:16:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:50.862 10:16:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.862 10:16:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:50.862 10:16:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.862 10:16:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:50.863 10:16:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2761762 00:08:50.863 10:16:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 2761762 ']' 00:08:50.863 10:16:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 2761762 00:08:50.863 10:16:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:08:50.863 10:16:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:50.863 10:16:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2761762 00:08:50.863 10:16:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:50.863 10:16:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:50.863 10:16:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2761762' 00:08:50.863 killing process with pid 2761762 00:08:50.863 10:16:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 2761762 00:08:50.863 10:16:27 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 2761762 00:08:50.863 10:16:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:50.863 00:08:50.863 real 0m8.018s 00:08:50.863 user 0m31.280s 00:08:50.863 sys 0m0.937s 00:08:50.863 10:16:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:50.863 10:16:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:50.863 ************************************ 00:08:50.863 END TEST nvmf_filesystem_no_in_capsule 00:08:50.863 ************************************ 00:08:51.124 10:16:28 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:08:51.124 10:16:28 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:51.124 10:16:28 
nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:51.124 10:16:28 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:51.124 10:16:28 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:51.124 ************************************ 00:08:51.124 START TEST nvmf_filesystem_in_capsule 00:08:51.124 ************************************ 00:08:51.124 10:16:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:08:51.124 10:16:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:51.124 10:16:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:51.124 10:16:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:51.124 10:16:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:51.124 10:16:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:51.124 10:16:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2763530 00:08:51.124 10:16:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2763530 00:08:51.124 10:16:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:51.124 10:16:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 2763530 ']' 00:08:51.124 10:16:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.124 10:16:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:51.124 10:16:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:51.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:51.124 10:16:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:51.124 10:16:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:51.124 [2024-07-15 10:16:28.195868] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:51.124 [2024-07-15 10:16:28.195913] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:51.124 EAL: No free 2048 kB hugepages reported on node 1 00:08:51.124 [2024-07-15 10:16:28.264063] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:51.386 [2024-07-15 10:16:28.329602] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:51.386 [2024-07-15 10:16:28.329638] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
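Before any configuration RPCs are issued, nvmfappstart (traced just above) brings up a fresh target process and waits for its RPC socket. The binary path, flags and PID are the ones shown in this log; the readiness check is reduced here to a plain socket test, whereas the real waitforlisten polls the RPC server itself:

    # start the SPDK NVMe-oF target: shm id 0, tracepoint group mask 0xFFFF, cores 0-3 (-m 0xF)
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!                                   # 2763530 in this run
    # wait for the UNIX-domain RPC socket to appear (simplified stand-in for waitforlisten)
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done
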
00:08:51.386 [2024-07-15 10:16:28.329645] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:51.386 [2024-07-15 10:16:28.329652] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:51.386 [2024-07-15 10:16:28.329661] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:51.386 [2024-07-15 10:16:28.329804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:51.386 [2024-07-15 10:16:28.329918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:51.386 [2024-07-15 10:16:28.330090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:51.386 [2024-07-15 10:16:28.330090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.958 10:16:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:51.958 10:16:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:08:51.958 10:16:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:51.958 10:16:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:51.958 10:16:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:51.958 10:16:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:51.958 10:16:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:51.958 10:16:28 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096 00:08:51.958 10:16:29 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:51.958 10:16:29 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:51.958 [2024-07-15 10:16:29.041674] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1edd200/0x1ee16f0) succeed. 00:08:51.958 [2024-07-15 10:16:29.058257] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1ede840/0x1f22d80) succeed. 
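Stripped of the xtrace noise, the transport RPC just above and the bdev/subsystem RPCs that follow reduce to a short sequence: create the RDMA transport with a 4096-byte in-capsule data size, back it with a malloc bdev, and expose that bdev through subsystem cnode1 on 192.168.100.8:4420. All values are taken from this trace; rpc_cmd is the test suite's RPC helper seen throughout the log:

    rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096   # -c 4096 = in-capsule data size under test
    rpc_cmd bdev_malloc_create 512 512 -b Malloc1        # 512 MiB backing bdev, 512-byte blocks (1048576 blocks)
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
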
00:08:52.220 10:16:29 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:52.220 10:16:29 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:52.220 10:16:29 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:52.220 10:16:29 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:52.220 Malloc1 00:08:52.220 10:16:29 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:52.220 10:16:29 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:52.220 10:16:29 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:52.220 10:16:29 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:52.220 10:16:29 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:52.220 10:16:29 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:52.220 10:16:29 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:52.220 10:16:29 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:52.220 10:16:29 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:52.220 10:16:29 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:52.220 10:16:29 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:52.220 10:16:29 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:52.220 [2024-07-15 10:16:29.289190] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:52.220 10:16:29 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:52.220 10:16:29 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:52.220 10:16:29 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:08:52.220 10:16:29 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:08:52.220 10:16:29 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:08:52.220 10:16:29 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:08:52.220 10:16:29 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:52.220 10:16:29 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:52.220 10:16:29 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:52.220 
10:16:29 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:52.220 10:16:29 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:08:52.220 { 00:08:52.220 "name": "Malloc1", 00:08:52.220 "aliases": [ 00:08:52.220 "c5058ec6-af07-479f-9516-d166cf6291ac" 00:08:52.220 ], 00:08:52.220 "product_name": "Malloc disk", 00:08:52.220 "block_size": 512, 00:08:52.220 "num_blocks": 1048576, 00:08:52.220 "uuid": "c5058ec6-af07-479f-9516-d166cf6291ac", 00:08:52.220 "assigned_rate_limits": { 00:08:52.220 "rw_ios_per_sec": 0, 00:08:52.220 "rw_mbytes_per_sec": 0, 00:08:52.220 "r_mbytes_per_sec": 0, 00:08:52.220 "w_mbytes_per_sec": 0 00:08:52.220 }, 00:08:52.220 "claimed": true, 00:08:52.220 "claim_type": "exclusive_write", 00:08:52.220 "zoned": false, 00:08:52.220 "supported_io_types": { 00:08:52.220 "read": true, 00:08:52.220 "write": true, 00:08:52.220 "unmap": true, 00:08:52.220 "flush": true, 00:08:52.220 "reset": true, 00:08:52.220 "nvme_admin": false, 00:08:52.220 "nvme_io": false, 00:08:52.220 "nvme_io_md": false, 00:08:52.220 "write_zeroes": true, 00:08:52.220 "zcopy": true, 00:08:52.220 "get_zone_info": false, 00:08:52.220 "zone_management": false, 00:08:52.220 "zone_append": false, 00:08:52.220 "compare": false, 00:08:52.220 "compare_and_write": false, 00:08:52.220 "abort": true, 00:08:52.220 "seek_hole": false, 00:08:52.220 "seek_data": false, 00:08:52.220 "copy": true, 00:08:52.220 "nvme_iov_md": false 00:08:52.220 }, 00:08:52.220 "memory_domains": [ 00:08:52.220 { 00:08:52.220 "dma_device_id": "system", 00:08:52.220 "dma_device_type": 1 00:08:52.220 }, 00:08:52.220 { 00:08:52.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:52.220 "dma_device_type": 2 00:08:52.220 } 00:08:52.220 ], 00:08:52.220 "driver_specific": {} 00:08:52.220 } 00:08:52.220 ]' 00:08:52.220 10:16:29 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:08:52.220 10:16:29 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:08:52.220 10:16:29 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:08:52.220 10:16:29 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:08:52.220 10:16:29 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:08:52.220 10:16:29 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:08:52.220 10:16:29 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:52.220 10:16:29 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:54.135 10:16:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:54.135 10:16:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:08:54.135 10:16:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:54.135 10:16:30 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:54.135 10:16:30 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:08:56.041 10:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:56.041 10:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:56.041 10:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:56.041 10:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:56.041 10:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:56.041 10:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:08:56.041 10:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:56.041 10:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:56.042 10:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:56.042 10:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:56.042 10:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:56.042 10:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:56.042 10:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:56.042 10:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:56.042 10:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:56.042 10:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:56.042 10:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:56.042 10:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:56.042 10:16:32 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:56.978 10:16:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:56.978 10:16:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:56.978 10:16:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:56.978 10:16:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:56.978 10:16:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:56.978 ************************************ 00:08:56.978 START TEST filesystem_in_capsule_ext4 00:08:56.978 
************************************ 00:08:56.978 10:16:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:56.978 10:16:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:56.978 10:16:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:56.978 10:16:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:56.978 10:16:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:08:56.978 10:16:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:56.978 10:16:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:08:56.978 10:16:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:08:56.978 10:16:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:08:56.978 10:16:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:08:56.978 10:16:33 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:56.978 mke2fs 1.46.5 (30-Dec-2021) 00:08:56.978 Discarding device blocks: 0/522240 done 00:08:56.978 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:56.978 Filesystem UUID: af98960e-0425-4b0f-b4e3-104aa8167691 00:08:56.978 Superblock backups stored on blocks: 00:08:56.978 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:56.978 00:08:56.978 Allocating group tables: 0/64 done 00:08:56.978 Writing inode tables: 0/64 done 00:08:56.978 Creating journal (8192 blocks): done 00:08:56.978 Writing superblocks and filesystem accounting information: 0/64 done 00:08:56.978 00:08:56.978 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:08:56.978 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:56.978 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:56.978 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:08:56.978 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:56.978 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:08:56.978 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:56.978 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@30 -- # umount /mnt/device 00:08:56.978 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2763530 00:08:56.978 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:56.978 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:56.978 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:56.978 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:56.978 00:08:56.978 real 0m0.134s 00:08:56.978 user 0m0.020s 00:08:56.978 sys 0m0.053s 00:08:56.978 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:56.978 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:56.978 ************************************ 00:08:56.978 END TEST filesystem_in_capsule_ext4 00:08:56.978 ************************************ 00:08:56.978 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:56.978 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:56.978 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:56.978 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:56.978 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:57.238 ************************************ 00:08:57.238 START TEST filesystem_in_capsule_btrfs 00:08:57.238 ************************************ 00:08:57.238 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:57.238 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:57.238 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:57.238 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:57.238 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:08:57.238 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:57.238 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:08:57.238 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:08:57.238 10:16:34 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:08:57.238 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:08:57.238 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:57.238 btrfs-progs v6.6.2 00:08:57.238 See https://btrfs.readthedocs.io for more information. 00:08:57.238 00:08:57.238 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:08:57.238 NOTE: several default settings have changed in version 5.15, please make sure 00:08:57.238 this does not affect your deployments: 00:08:57.238 - DUP for metadata (-m dup) 00:08:57.238 - enabled no-holes (-O no-holes) 00:08:57.238 - enabled free-space-tree (-R free-space-tree) 00:08:57.238 00:08:57.238 Label: (null) 00:08:57.238 UUID: b0d57188-a036-4618-abe5-6d721968fe6d 00:08:57.238 Node size: 16384 00:08:57.238 Sector size: 4096 00:08:57.238 Filesystem size: 510.00MiB 00:08:57.238 Block group profiles: 00:08:57.238 Data: single 8.00MiB 00:08:57.238 Metadata: DUP 32.00MiB 00:08:57.238 System: DUP 8.00MiB 00:08:57.238 SSD detected: yes 00:08:57.238 Zoned device: no 00:08:57.238 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:57.238 Runtime features: free-space-tree 00:08:57.238 Checksum: crc32c 00:08:57.238 Number of devices: 1 00:08:57.238 Devices: 00:08:57.238 ID SIZE PATH 00:08:57.238 1 510.00MiB /dev/nvme0n1p1 00:08:57.238 00:08:57.238 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:08:57.238 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:57.238 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:57.238 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:08:57.238 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:57.238 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:08:57.238 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:57.238 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:57.238 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2763530 00:08:57.238 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:57.238 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:57.238 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:57.238 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:57.238 00:08:57.238 real 0m0.134s 00:08:57.238 user 0m0.019s 00:08:57.238 sys 0m0.062s 00:08:57.238 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:57.238 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:57.238 ************************************ 00:08:57.238 END TEST filesystem_in_capsule_btrfs 00:08:57.238 ************************************ 00:08:57.238 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:57.238 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:57.238 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:57.238 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:57.238 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:57.238 ************************************ 00:08:57.238 START TEST filesystem_in_capsule_xfs 00:08:57.238 ************************************ 00:08:57.238 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:08:57.238 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:57.238 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:57.238 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:57.238 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:08:57.238 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:57.238 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:08:57.238 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:08:57.238 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:08:57.238 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:08:57.238 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:57.498 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:57.498 = sectsz=512 attr=2, projid32bit=1 00:08:57.498 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:57.498 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:57.498 data = bsize=4096 blocks=130560, imaxpct=25 00:08:57.498 = sunit=0 swidth=0 blks 00:08:57.498 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 
00:08:57.498 log =internal log bsize=4096 blocks=16384, version=2 00:08:57.498 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:57.498 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:57.498 Discarding blocks...Done. 00:08:57.498 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:08:57.498 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:57.498 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:57.498 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:08:57.498 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:57.498 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:08:57.498 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:08:57.498 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:57.498 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2763530 00:08:57.498 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:57.498 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:57.498 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:57.498 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:57.498 00:08:57.498 real 0m0.147s 00:08:57.498 user 0m0.023s 00:08:57.498 sys 0m0.048s 00:08:57.498 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:57.498 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:57.498 ************************************ 00:08:57.498 END TEST filesystem_in_capsule_xfs 00:08:57.498 ************************************ 00:08:57.498 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:57.498 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:57.498 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:57.498 10:16:34 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:58.879 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:58.879 10:16:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:58.879 10:16:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1219 -- # local i=0 00:08:58.879 10:16:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:58.879 10:16:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:58.879 10:16:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:58.879 10:16:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:58.879 10:16:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:08:58.879 10:16:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:58.879 10:16:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.879 10:16:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:58.879 10:16:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.879 10:16:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:58.879 10:16:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2763530 00:08:58.879 10:16:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 2763530 ']' 00:08:58.879 10:16:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 2763530 00:08:58.879 10:16:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:08:58.879 10:16:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:58.879 10:16:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2763530 00:08:58.879 10:16:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:58.879 10:16:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:58.879 10:16:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2763530' 00:08:58.879 killing process with pid 2763530 00:08:58.879 10:16:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 2763530 00:08:58.880 10:16:35 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 2763530 00:08:59.139 10:16:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:59.139 00:08:59.139 real 0m8.098s 00:08:59.139 user 0m31.576s 00:08:59.139 sys 0m0.945s 00:08:59.139 10:16:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:59.139 10:16:36 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:59.139 ************************************ 00:08:59.139 END TEST nvmf_filesystem_in_capsule 00:08:59.139 ************************************ 00:08:59.139 10:16:36 
nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:08:59.139 10:16:36 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:08:59.139 10:16:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:59.139 10:16:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:08:59.139 10:16:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:08:59.139 10:16:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:08:59.139 10:16:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:08:59.139 10:16:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:59.139 10:16:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:08:59.139 rmmod nvme_rdma 00:08:59.139 rmmod nvme_fabrics 00:08:59.139 10:16:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:59.139 10:16:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:08:59.139 10:16:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:08:59.139 10:16:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:59.139 10:16:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:59.139 10:16:36 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:08:59.139 00:08:59.139 real 0m24.412s 00:08:59.139 user 1m5.213s 00:08:59.139 sys 0m7.909s 00:08:59.139 10:16:36 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:59.139 10:16:36 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:59.139 ************************************ 00:08:59.139 END TEST nvmf_filesystem 00:08:59.139 ************************************ 00:08:59.400 10:16:36 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:08:59.400 10:16:36 nvmf_rdma -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:08:59.400 10:16:36 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:59.400 10:16:36 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:59.400 10:16:36 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:08:59.400 ************************************ 00:08:59.400 START TEST nvmf_target_discovery 00:08:59.400 ************************************ 00:08:59.400 10:16:36 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:08:59.400 * Looking for test storage... 
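Between suites, nvmftestfini (traced just above) returns the host to a clean state by unloading the NVMe fabrics modules; the script's modprobe retry loop is reduced to single calls in this sketch, and the comments reflect what this particular run's output shows:

    sync
    modprobe -v -r nvme-rdma      # in this run this also rmmod'ed nvme_fabrics as a dependent module
    modprobe -v -r nvme-fabrics   # no-op here, since the previous removal already unloaded it
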
00:08:59.400 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:59.400 10:16:36 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:59.400 10:16:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:08:59.400 10:16:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:59.400 10:16:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:59.400 10:16:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:59.400 10:16:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:59.400 10:16:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:59.400 10:16:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:59.400 10:16:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:59.400 10:16:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:59.400 10:16:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:59.400 10:16:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:59.400 10:16:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:59.400 10:16:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:59.400 10:16:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:59.400 10:16:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:59.400 10:16:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:59.400 10:16:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:59.400 10:16:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:59.400 10:16:36 nvmf_rdma.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:59.400 10:16:36 nvmf_rdma.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:59.400 10:16:36 nvmf_rdma.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:59.400 10:16:36 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.400 10:16:36 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.400 10:16:36 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.400 10:16:36 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:08:59.400 10:16:36 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.400 10:16:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:08:59.400 10:16:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:59.400 10:16:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:59.400 10:16:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:59.400 10:16:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:59.400 10:16:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:59.400 10:16:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:59.400 10:16:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:59.400 10:16:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:59.400 10:16:36 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:59.401 10:16:36 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:59.401 10:16:36 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:59.401 10:16:36 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:08:59.401 10:16:36 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:08:59.401 10:16:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:08:59.401 10:16:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:59.401 10:16:36 nvmf_rdma.nvmf_target_discovery -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:08:59.401 10:16:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:59.401 10:16:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:59.401 10:16:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:59.401 10:16:36 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:59.401 10:16:36 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:59.401 10:16:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:59.401 10:16:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:59.401 10:16:36 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:08:59.401 10:16:36 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:07.538 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:07.538 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:09:07.538 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:07.538 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:07.538 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:07.538 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:07.538 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:07.538 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:09:07.539 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:09:07.539 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:07.539 10:16:44 
nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:09:07.539 Found net devices under 0000:98:00.0: mlx_0_0 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:09:07.539 Found net devices under 0000:98:00.1: mlx_0_1 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@420 -- # rdma_device_init 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@58 -- # uname 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@502 -- # allocate_nic_ips 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:07.539 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:07.540 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:07.540 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:07.540 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:07.540 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:09:07.540 altname enp152s0f0np0 00:09:07.540 altname ens817f0np0 00:09:07.540 inet 192.168.100.8/24 scope global mlx_0_0 00:09:07.540 valid_lft forever preferred_lft forever 00:09:07.540 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:07.540 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:07.540 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:07.540 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:07.540 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:07.540 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:07.540 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:07.540 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:07.540 10:16:44 
nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:07.540 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:07.540 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:09:07.540 altname enp152s0f1np1 00:09:07.540 altname ens817f1np1 00:09:07.540 inet 192.168.100.9/24 scope global mlx_0_1 00:09:07.540 valid_lft forever preferred_lft forever 00:09:07.540 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:09:07.540 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:07.540 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:07.540 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:09:07.540 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:09:07.540 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@86 -- # get_rdma_if_list 00:09:07.540 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:07.540 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:07.540 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:07.540 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:07.540 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:07.540 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:07.540 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:07.540 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:07.540 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:07.540 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:09:07.540 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:07.540 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:07.540 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:07.540 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:07.540 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:07.540 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:07.540 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:09:07.540 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:07.540 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:07.540 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:07.540 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:07.540 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:07.540 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:07.540 10:16:44 nvmf_rdma.nvmf_target_discovery -- 
nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:07.540 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:07.540 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:07.540 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:07.540 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:07.540 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:07.540 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:09:07.540 192.168.100.9' 00:09:07.540 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:09:07.540 192.168.100.9' 00:09:07.540 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@457 -- # head -n 1 00:09:07.540 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:07.540 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:09:07.540 192.168.100.9' 00:09:07.540 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@458 -- # tail -n +2 00:09:07.540 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@458 -- # head -n 1 00:09:07.540 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:07.540 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:09:07.540 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:07.540 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:09:07.540 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:09:07.540 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:09:07.540 10:16:44 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:09:07.540 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:07.540 10:16:44 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:07.540 10:16:44 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:07.540 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=2769599 00:09:07.540 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 2769599 00:09:07.540 10:16:44 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:07.540 10:16:44 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 2769599 ']' 00:09:07.540 10:16:44 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:07.540 10:16:44 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:07.540 10:16:44 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:07.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
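The trace above is where nvmf/common.sh picks the test target addresses: for each interface returned by get_rdma_if_list it runs `ip -o -4 addr show <if>`, keeps the fourth field, strips the prefix length with cut, then takes the first address as NVMF_FIRST_TARGET_IP and the next as NVMF_SECOND_TARGET_IP. A minimal standalone sketch of that same derivation, using the mlx_0_0/mlx_0_1 interface names and 192.168.100.8/9 addresses seen in this run (both will differ on other hosts):

  # sketch: reproduce the address derivation done by get_ip_address()/get_available_rdma_ips
  get_ip_address() {
      local interface=$1
      # field 4 of `ip -o -4 addr show` is "ADDR/PREFIX"; drop the prefix length
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  RDMA_IP_LIST="$(get_ip_address mlx_0_0)
  $(get_ip_address mlx_0_1)"
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8 in this log
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9 in this log
  echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"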
00:09:07.540 10:16:44 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:07.540 10:16:44 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:07.540 [2024-07-15 10:16:44.289370] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:07.540 [2024-07-15 10:16:44.289440] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:07.540 EAL: No free 2048 kB hugepages reported on node 1 00:09:07.540 [2024-07-15 10:16:44.365089] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:07.540 [2024-07-15 10:16:44.440227] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:07.540 [2024-07-15 10:16:44.440271] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:07.540 [2024-07-15 10:16:44.440279] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:07.540 [2024-07-15 10:16:44.440286] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:07.540 [2024-07-15 10:16:44.440291] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:07.540 [2024-07-15 10:16:44.440357] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:07.540 [2024-07-15 10:16:44.440492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:07.540 [2024-07-15 10:16:44.440649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.540 [2024-07-15 10:16:44.440650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:08.112 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:08.112 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:09:08.112 10:16:45 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:08.112 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:08.112 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:08.112 10:16:45 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:08.112 10:16:45 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:08.112 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.112 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:08.112 [2024-07-15 10:16:45.157896] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2269200/0x226d6f0) succeed. 00:09:08.112 [2024-07-15 10:16:45.172301] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x226a840/0x22aed80) succeed. 
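With the IB devices created and nvmf_tgt listening on /var/tmp/spdk.sock, target/discovery.sh now drives the target through rpc_cmd, a wrapper around the SPDK scripts/rpc.py client. A condensed sketch of the equivalent rpc.py calls for the first of the four subsystems follows; the rpc.py path is assumed to live under the same spdk checkout used elsewhere in this job, the listener address is the 192.168.100.8 seen in this run, and the test repeats the bdev/subsystem/namespace/listener steps for cnode1 through cnode4:

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py    # assumed path; rpc_cmd resolves it from the checkout
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $rpc bdev_null_create Null1 102400 512                              # size and block-size arguments as traced below
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
  $rpc nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430
  nvme discover -t rdma -a 192.168.100.8 -s 4420                      # initiator-side check; the test also passes --hostnqn/--hostid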
00:09:08.112 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.112 10:16:45 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:09:08.112 10:16:45 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:08.112 10:16:45 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:09:08.112 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.112 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:08.373 Null1 00:09:08.373 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.373 10:16:45 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:08.373 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.373 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:08.373 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.373 10:16:45 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:09:08.373 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.373 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:08.373 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.373 10:16:45 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:08.373 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.373 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:08.373 [2024-07-15 10:16:45.349071] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:08.373 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.373 10:16:45 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:08.373 10:16:45 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:09:08.373 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.373 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:08.373 Null2 00:09:08.373 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.373 10:16:45 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:09:08.373 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.373 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:08.373 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.373 10:16:45 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:09:08.373 10:16:45 
nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.373 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:08.373 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.373 10:16:45 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:09:08.373 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.373 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:08.373 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.373 10:16:45 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:08.373 10:16:45 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:09:08.373 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.373 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:08.373 Null3 00:09:08.373 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.373 10:16:45 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:09:08.373 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.373 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:08.373 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.373 10:16:45 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:09:08.373 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.373 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:08.373 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.373 10:16:45 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:09:08.373 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.373 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:08.373 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.373 10:16:45 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:08.373 10:16:45 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:09:08.373 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.373 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:08.373 Null4 00:09:08.373 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.373 10:16:45 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:09:08.373 10:16:45 nvmf_rdma.nvmf_target_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.373 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:08.373 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.373 10:16:45 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:09:08.373 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.373 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:08.373 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.373 10:16:45 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:09:08.373 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.373 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:08.373 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.373 10:16:45 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:09:08.373 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.373 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:08.373 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.373 10:16:45 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430 00:09:08.373 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.373 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:08.373 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.373 10:16:45 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -a 192.168.100.8 -s 4420 00:09:08.635 00:09:08.635 Discovery Log Number of Records 6, Generation counter 6 00:09:08.635 =====Discovery Log Entry 0====== 00:09:08.635 trtype: rdma 00:09:08.635 adrfam: ipv4 00:09:08.635 subtype: current discovery subsystem 00:09:08.635 treq: not required 00:09:08.635 portid: 0 00:09:08.635 trsvcid: 4420 00:09:08.635 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:08.635 traddr: 192.168.100.8 00:09:08.635 eflags: explicit discovery connections, duplicate discovery information 00:09:08.635 rdma_prtype: not specified 00:09:08.635 rdma_qptype: connected 00:09:08.635 rdma_cms: rdma-cm 00:09:08.635 rdma_pkey: 0x0000 00:09:08.635 =====Discovery Log Entry 1====== 00:09:08.635 trtype: rdma 00:09:08.635 adrfam: ipv4 00:09:08.635 subtype: nvme subsystem 00:09:08.635 treq: not required 00:09:08.635 portid: 0 00:09:08.635 trsvcid: 4420 00:09:08.635 subnqn: nqn.2016-06.io.spdk:cnode1 00:09:08.635 traddr: 192.168.100.8 00:09:08.635 eflags: none 00:09:08.635 rdma_prtype: not specified 00:09:08.635 rdma_qptype: connected 00:09:08.635 rdma_cms: rdma-cm 00:09:08.635 rdma_pkey: 0x0000 00:09:08.635 =====Discovery Log Entry 2====== 00:09:08.635 
trtype: rdma 00:09:08.635 adrfam: ipv4 00:09:08.635 subtype: nvme subsystem 00:09:08.635 treq: not required 00:09:08.635 portid: 0 00:09:08.635 trsvcid: 4420 00:09:08.635 subnqn: nqn.2016-06.io.spdk:cnode2 00:09:08.635 traddr: 192.168.100.8 00:09:08.635 eflags: none 00:09:08.635 rdma_prtype: not specified 00:09:08.635 rdma_qptype: connected 00:09:08.635 rdma_cms: rdma-cm 00:09:08.635 rdma_pkey: 0x0000 00:09:08.635 =====Discovery Log Entry 3====== 00:09:08.635 trtype: rdma 00:09:08.635 adrfam: ipv4 00:09:08.635 subtype: nvme subsystem 00:09:08.635 treq: not required 00:09:08.635 portid: 0 00:09:08.635 trsvcid: 4420 00:09:08.635 subnqn: nqn.2016-06.io.spdk:cnode3 00:09:08.635 traddr: 192.168.100.8 00:09:08.635 eflags: none 00:09:08.635 rdma_prtype: not specified 00:09:08.635 rdma_qptype: connected 00:09:08.635 rdma_cms: rdma-cm 00:09:08.635 rdma_pkey: 0x0000 00:09:08.635 =====Discovery Log Entry 4====== 00:09:08.635 trtype: rdma 00:09:08.635 adrfam: ipv4 00:09:08.635 subtype: nvme subsystem 00:09:08.635 treq: not required 00:09:08.635 portid: 0 00:09:08.635 trsvcid: 4420 00:09:08.635 subnqn: nqn.2016-06.io.spdk:cnode4 00:09:08.635 traddr: 192.168.100.8 00:09:08.635 eflags: none 00:09:08.635 rdma_prtype: not specified 00:09:08.635 rdma_qptype: connected 00:09:08.635 rdma_cms: rdma-cm 00:09:08.635 rdma_pkey: 0x0000 00:09:08.635 =====Discovery Log Entry 5====== 00:09:08.635 trtype: rdma 00:09:08.635 adrfam: ipv4 00:09:08.635 subtype: discovery subsystem referral 00:09:08.635 treq: not required 00:09:08.635 portid: 0 00:09:08.635 trsvcid: 4430 00:09:08.635 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:08.635 traddr: 192.168.100.8 00:09:08.635 eflags: none 00:09:08.635 rdma_prtype: unrecognized 00:09:08.635 rdma_qptype: unrecognized 00:09:08.635 rdma_cms: unrecognized 00:09:08.635 rdma_pkey: 0x0000 00:09:08.635 10:16:45 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:09:08.635 Perform nvmf subsystem discovery via RPC 00:09:08.635 10:16:45 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:09:08.635 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.635 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:08.635 [ 00:09:08.635 { 00:09:08.635 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:09:08.635 "subtype": "Discovery", 00:09:08.635 "listen_addresses": [ 00:09:08.635 { 00:09:08.635 "trtype": "RDMA", 00:09:08.635 "adrfam": "IPv4", 00:09:08.635 "traddr": "192.168.100.8", 00:09:08.635 "trsvcid": "4420" 00:09:08.635 } 00:09:08.635 ], 00:09:08.635 "allow_any_host": true, 00:09:08.635 "hosts": [] 00:09:08.635 }, 00:09:08.635 { 00:09:08.635 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:08.635 "subtype": "NVMe", 00:09:08.635 "listen_addresses": [ 00:09:08.635 { 00:09:08.635 "trtype": "RDMA", 00:09:08.635 "adrfam": "IPv4", 00:09:08.635 "traddr": "192.168.100.8", 00:09:08.635 "trsvcid": "4420" 00:09:08.635 } 00:09:08.635 ], 00:09:08.635 "allow_any_host": true, 00:09:08.635 "hosts": [], 00:09:08.635 "serial_number": "SPDK00000000000001", 00:09:08.635 "model_number": "SPDK bdev Controller", 00:09:08.635 "max_namespaces": 32, 00:09:08.635 "min_cntlid": 1, 00:09:08.635 "max_cntlid": 65519, 00:09:08.635 "namespaces": [ 00:09:08.635 { 00:09:08.635 "nsid": 1, 00:09:08.635 "bdev_name": "Null1", 00:09:08.635 "name": "Null1", 00:09:08.635 "nguid": "5622127ED8A44C63AFC369EBD5FEA417", 00:09:08.635 "uuid": 
"5622127e-d8a4-4c63-afc3-69ebd5fea417" 00:09:08.635 } 00:09:08.635 ] 00:09:08.635 }, 00:09:08.635 { 00:09:08.636 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:08.636 "subtype": "NVMe", 00:09:08.636 "listen_addresses": [ 00:09:08.636 { 00:09:08.636 "trtype": "RDMA", 00:09:08.636 "adrfam": "IPv4", 00:09:08.636 "traddr": "192.168.100.8", 00:09:08.636 "trsvcid": "4420" 00:09:08.636 } 00:09:08.636 ], 00:09:08.636 "allow_any_host": true, 00:09:08.636 "hosts": [], 00:09:08.636 "serial_number": "SPDK00000000000002", 00:09:08.636 "model_number": "SPDK bdev Controller", 00:09:08.636 "max_namespaces": 32, 00:09:08.636 "min_cntlid": 1, 00:09:08.636 "max_cntlid": 65519, 00:09:08.636 "namespaces": [ 00:09:08.636 { 00:09:08.636 "nsid": 1, 00:09:08.636 "bdev_name": "Null2", 00:09:08.636 "name": "Null2", 00:09:08.636 "nguid": "02767EE8ED11421C96DFE41362B4F584", 00:09:08.636 "uuid": "02767ee8-ed11-421c-96df-e41362b4f584" 00:09:08.636 } 00:09:08.636 ] 00:09:08.636 }, 00:09:08.636 { 00:09:08.636 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:09:08.636 "subtype": "NVMe", 00:09:08.636 "listen_addresses": [ 00:09:08.636 { 00:09:08.636 "trtype": "RDMA", 00:09:08.636 "adrfam": "IPv4", 00:09:08.636 "traddr": "192.168.100.8", 00:09:08.636 "trsvcid": "4420" 00:09:08.636 } 00:09:08.636 ], 00:09:08.636 "allow_any_host": true, 00:09:08.636 "hosts": [], 00:09:08.636 "serial_number": "SPDK00000000000003", 00:09:08.636 "model_number": "SPDK bdev Controller", 00:09:08.636 "max_namespaces": 32, 00:09:08.636 "min_cntlid": 1, 00:09:08.636 "max_cntlid": 65519, 00:09:08.636 "namespaces": [ 00:09:08.636 { 00:09:08.636 "nsid": 1, 00:09:08.636 "bdev_name": "Null3", 00:09:08.636 "name": "Null3", 00:09:08.636 "nguid": "8B280B98D782459CA91E8CC1AB4D53A2", 00:09:08.636 "uuid": "8b280b98-d782-459c-a91e-8cc1ab4d53a2" 00:09:08.636 } 00:09:08.636 ] 00:09:08.636 }, 00:09:08.636 { 00:09:08.636 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:09:08.636 "subtype": "NVMe", 00:09:08.636 "listen_addresses": [ 00:09:08.636 { 00:09:08.636 "trtype": "RDMA", 00:09:08.636 "adrfam": "IPv4", 00:09:08.636 "traddr": "192.168.100.8", 00:09:08.636 "trsvcid": "4420" 00:09:08.636 } 00:09:08.636 ], 00:09:08.636 "allow_any_host": true, 00:09:08.636 "hosts": [], 00:09:08.636 "serial_number": "SPDK00000000000004", 00:09:08.636 "model_number": "SPDK bdev Controller", 00:09:08.636 "max_namespaces": 32, 00:09:08.636 "min_cntlid": 1, 00:09:08.636 "max_cntlid": 65519, 00:09:08.636 "namespaces": [ 00:09:08.636 { 00:09:08.636 "nsid": 1, 00:09:08.636 "bdev_name": "Null4", 00:09:08.636 "name": "Null4", 00:09:08.636 "nguid": "79A3AED745774C8CB6609EF2CDC605ED", 00:09:08.636 "uuid": "79a3aed7-4577-4c8c-b660-9ef2cdc605ed" 00:09:08.636 } 00:09:08.636 ] 00:09:08.636 } 00:09:08.636 ] 00:09:08.636 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.636 10:16:45 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:09:08.636 10:16:45 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:08.636 10:16:45 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:08.636 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.636 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:08.636 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.636 10:16:45 nvmf_rdma.nvmf_target_discovery -- 
target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:09:08.636 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.636 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:08.636 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.636 10:16:45 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:08.636 10:16:45 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:09:08.636 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.636 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:08.636 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.636 10:16:45 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:09:08.636 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.636 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:08.636 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.636 10:16:45 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:08.636 10:16:45 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:09:08.636 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.636 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:08.636 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.636 10:16:45 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:09:08.636 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.636 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:08.636 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.636 10:16:45 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:08.636 10:16:45 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:09:08.636 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.636 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:08.636 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.636 10:16:45 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:09:08.636 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.636 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:08.636 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.636 10:16:45 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430 00:09:08.636 10:16:45 nvmf_rdma.nvmf_target_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.636 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:08.636 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.636 10:16:45 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:09:08.636 10:16:45 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:09:08.636 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:08.636 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:08.636 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:08.636 10:16:45 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:09:08.636 10:16:45 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:09:08.636 10:16:45 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:09:08.636 10:16:45 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:09:08.636 10:16:45 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:08.636 10:16:45 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:09:08.636 10:16:45 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:09:08.636 10:16:45 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:09:08.636 10:16:45 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:09:08.636 10:16:45 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:08.636 10:16:45 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:09:08.636 rmmod nvme_rdma 00:09:08.636 rmmod nvme_fabrics 00:09:08.636 10:16:45 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:08.636 10:16:45 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:09:08.636 10:16:45 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:09:08.636 10:16:45 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 2769599 ']' 00:09:08.636 10:16:45 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 2769599 00:09:08.636 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 2769599 ']' 00:09:08.636 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 2769599 00:09:08.636 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:09:08.897 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:08.897 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2769599 00:09:08.897 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:08.897 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:08.897 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2769599' 00:09:08.897 killing process with pid 2769599 00:09:08.897 10:16:45 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 2769599 00:09:08.897 10:16:45 nvmf_rdma.nvmf_target_discovery -- 
common/autotest_common.sh@972 -- # wait 2769599 00:09:09.158 10:16:46 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:09.158 10:16:46 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:09:09.158 00:09:09.158 real 0m9.703s 00:09:09.158 user 0m8.856s 00:09:09.158 sys 0m6.143s 00:09:09.158 10:16:46 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:09.158 10:16:46 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:09.158 ************************************ 00:09:09.158 END TEST nvmf_target_discovery 00:09:09.158 ************************************ 00:09:09.158 10:16:46 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:09:09.158 10:16:46 nvmf_rdma -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:09:09.158 10:16:46 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:09.158 10:16:46 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:09.158 10:16:46 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:09:09.158 ************************************ 00:09:09.158 START TEST nvmf_referrals 00:09:09.158 ************************************ 00:09:09.158 10:16:46 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:09:09.158 * Looking for test storage... 00:09:09.158 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:09.158 10:16:46 nvmf_rdma.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:09.158 10:16:46 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:09:09.158 10:16:46 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:09.158 10:16:46 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:09.158 10:16:46 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:09.158 10:16:46 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:09.158 10:16:46 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:09.158 10:16:46 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:09.158 10:16:46 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:09.158 10:16:46 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:09.158 10:16:46 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:09.158 10:16:46 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:09.159 10:16:46 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:09.159 10:16:46 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:09.159 10:16:46 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:09.159 10:16:46 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:09.159 10:16:46 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:09.159 10:16:46 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:09.159 10:16:46 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:09.159 10:16:46 nvmf_rdma.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:09.159 10:16:46 nvmf_rdma.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:09.159 10:16:46 nvmf_rdma.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:09.159 10:16:46 nvmf_rdma.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.159 10:16:46 nvmf_rdma.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.159 10:16:46 nvmf_rdma.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.159 10:16:46 nvmf_rdma.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:09:09.159 10:16:46 nvmf_rdma.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.159 10:16:46 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:09:09.159 10:16:46 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:09.159 10:16:46 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:09.159 10:16:46 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:09.159 10:16:46 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:09.159 10:16:46 nvmf_rdma.nvmf_referrals -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:09.159 10:16:46 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:09.159 10:16:46 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:09.159 10:16:46 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:09.159 10:16:46 nvmf_rdma.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:09:09.159 10:16:46 nvmf_rdma.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:09:09.159 10:16:46 nvmf_rdma.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:09:09.159 10:16:46 nvmf_rdma.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:09:09.159 10:16:46 nvmf_rdma.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:09:09.159 10:16:46 nvmf_rdma.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:09:09.159 10:16:46 nvmf_rdma.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:09:09.159 10:16:46 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:09:09.159 10:16:46 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:09.159 10:16:46 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:09.159 10:16:46 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:09.159 10:16:46 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:09.159 10:16:46 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.159 10:16:46 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:09.159 10:16:46 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.159 10:16:46 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:09.159 10:16:46 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:09.159 10:16:46 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:09:09.159 10:16:46 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:09:17.294 10:16:53 
nvmf_rdma.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:09:17.294 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:09:17.294 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 
0x1015 == \0\x\1\0\1\9 ]] 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:09:17.294 Found net devices under 0000:98:00.0: mlx_0_0 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:09:17.294 Found net devices under 0000:98:00.1: mlx_0_1 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@420 -- # rdma_device_init 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@58 -- # uname 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@502 -- # 
allocate_nic_ips 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:17.294 10:16:53 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:17.294 10:16:54 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:17.294 10:16:54 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:17.294 10:16:54 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:17.294 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:17.294 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:09:17.294 altname enp152s0f0np0 00:09:17.294 altname ens817f0np0 00:09:17.294 inet 192.168.100.8/24 scope global mlx_0_0 00:09:17.294 valid_lft forever preferred_lft forever 00:09:17.295 10:16:54 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:17.295 10:16:54 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:17.295 10:16:54 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:17.295 10:16:54 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:17.295 10:16:54 
nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:17.295 10:16:54 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:17.295 10:16:54 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:17.295 10:16:54 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:17.295 10:16:54 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:17.295 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:17.295 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:09:17.295 altname enp152s0f1np1 00:09:17.295 altname ens817f1np1 00:09:17.295 inet 192.168.100.9/24 scope global mlx_0_1 00:09:17.295 valid_lft forever preferred_lft forever 00:09:17.295 10:16:54 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:09:17.295 10:16:54 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:17.295 10:16:54 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:17.295 10:16:54 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:09:17.295 10:16:54 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:09:17.295 10:16:54 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@86 -- # get_rdma_if_list 00:09:17.295 10:16:54 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:17.295 10:16:54 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:17.295 10:16:54 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:17.295 10:16:54 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:17.295 10:16:54 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:17.295 10:16:54 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:17.295 10:16:54 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:17.295 10:16:54 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:17.295 10:16:54 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:17.295 10:16:54 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:09:17.295 10:16:54 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:17.295 10:16:54 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:17.295 10:16:54 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:17.295 10:16:54 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:17.295 10:16:54 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:17.295 10:16:54 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:17.295 10:16:54 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:09:17.295 10:16:54 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:17.295 10:16:54 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:17.295 10:16:54 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:17.295 10:16:54 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:17.295 10:16:54 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # awk 
'{print $4}' 00:09:17.295 10:16:54 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:17.295 10:16:54 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:17.295 10:16:54 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:17.295 10:16:54 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:17.295 10:16:54 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:17.295 10:16:54 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:17.295 10:16:54 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:17.295 10:16:54 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:09:17.295 192.168.100.9' 00:09:17.295 10:16:54 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:09:17.295 192.168.100.9' 00:09:17.295 10:16:54 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@457 -- # head -n 1 00:09:17.295 10:16:54 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:17.295 10:16:54 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:09:17.295 192.168.100.9' 00:09:17.295 10:16:54 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@458 -- # tail -n +2 00:09:17.295 10:16:54 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@458 -- # head -n 1 00:09:17.295 10:16:54 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:17.295 10:16:54 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:09:17.295 10:16:54 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:17.295 10:16:54 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:09:17.295 10:16:54 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:09:17.295 10:16:54 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:09:17.295 10:16:54 nvmf_rdma.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:09:17.295 10:16:54 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:17.295 10:16:54 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:17.295 10:16:54 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:17.295 10:16:54 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=2774132 00:09:17.295 10:16:54 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 2774132 00:09:17.295 10:16:54 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:17.295 10:16:54 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 2774132 ']' 00:09:17.295 10:16:54 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.295 10:16:54 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:17.295 10:16:54 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:17.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
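For anyone following the xtrace above: nvmf/common.sh derives the target addresses by reading the IPv4 address on each detected RDMA netdev and stripping the prefix length, then splits the resulting list into a first and second target IP. A minimal stand-alone restatement of that derivation, using the same commands and the interface names detected in this run (get_ip_address is the harness helper seen in the trace, re-stated here outside the harness):

    # sketch of the address derivation traced above (mlx_0_0 / mlx_0_1 as detected in this run)
    get_ip_address() { ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1; }
    RDMA_IP_LIST="$(get_ip_address mlx_0_0)
    $(get_ip_address mlx_0_1)"
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8 in this run
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9 in this run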
00:09:17.295 10:16:54 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:17.295 10:16:54 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:17.295 [2024-07-15 10:16:54.198843] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:17.295 [2024-07-15 10:16:54.198917] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:17.295 EAL: No free 2048 kB hugepages reported on node 1 00:09:17.295 [2024-07-15 10:16:54.267506] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:17.295 [2024-07-15 10:16:54.333464] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:17.295 [2024-07-15 10:16:54.333500] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:17.295 [2024-07-15 10:16:54.333508] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:17.295 [2024-07-15 10:16:54.333514] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:17.295 [2024-07-15 10:16:54.333520] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:17.295 [2024-07-15 10:16:54.333659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:17.295 [2024-07-15 10:16:54.333776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:17.295 [2024-07-15 10:16:54.333930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.295 [2024-07-15 10:16:54.333932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:17.863 10:16:54 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:17.863 10:16:54 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:09:17.863 10:16:54 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:17.863 10:16:54 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:17.863 10:16:54 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:17.863 10:16:55 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:17.864 10:16:55 nvmf_rdma.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:17.864 10:16:55 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.864 10:16:55 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:17.864 [2024-07-15 10:16:55.048965] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xc1d200/0xc216f0) succeed. 00:09:18.122 [2024-07-15 10:16:55.063724] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xc1e840/0xc62d80) succeed. 
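Condensed, the setup that just ran is: launch nvmf_tgt, wait for its RPC socket, then create the RDMA transport. A sketch of those steps as they appear in the trace, with the build path shortened (waitforlisten and rpc_cmd are helpers from the SPDK test harness, not stand-alone tools; outside the harness the same RPC would typically be issued via scripts/rpc.py):

    # start the target and create the RDMA transport, as traced above
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    waitforlisten "$nvmfpid"     # harness helper: polls until /var/tmp/spdk.sock accepts RPCs
    rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192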
00:09:18.123 10:16:55 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.123 10:16:55 nvmf_rdma.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery 00:09:18.123 10:16:55 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.123 10:16:55 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:18.123 [2024-07-15 10:16:55.191489] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 8009 *** 00:09:18.123 10:16:55 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.123 10:16:55 nvmf_rdma.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 00:09:18.123 10:16:55 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.123 10:16:55 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:18.123 10:16:55 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.123 10:16:55 nvmf_rdma.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430 00:09:18.123 10:16:55 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.123 10:16:55 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:18.123 10:16:55 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.123 10:16:55 nvmf_rdma.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430 00:09:18.123 10:16:55 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.123 10:16:55 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:18.123 10:16:55 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.123 10:16:55 nvmf_rdma.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:18.123 10:16:55 nvmf_rdma.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:09:18.123 10:16:55 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.123 10:16:55 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:18.123 10:16:55 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.123 10:16:55 nvmf_rdma.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:09:18.123 10:16:55 nvmf_rdma.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:09:18.123 10:16:55 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:18.123 10:16:55 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:18.123 10:16:55 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:18.123 10:16:55 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.123 10:16:55 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:18.123 10:16:55 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:18.123 10:16:55 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.383 10:16:55 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:18.383 10:16:55 nvmf_rdma.nvmf_referrals -- target/referrals.sh@49 -- # [[ 
127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:18.383 10:16:55 nvmf_rdma.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:09:18.383 10:16:55 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:18.383 10:16:55 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:18.383 10:16:55 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:18.383 10:16:55 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:18.383 10:16:55 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:18.383 10:16:55 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:18.383 10:16:55 nvmf_rdma.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:18.383 10:16:55 nvmf_rdma.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 00:09:18.383 10:16:55 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.383 10:16:55 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:18.383 10:16:55 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.383 10:16:55 nvmf_rdma.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.3 -s 4430 00:09:18.383 10:16:55 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.383 10:16:55 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:18.383 10:16:55 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.383 10:16:55 nvmf_rdma.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.4 -s 4430 00:09:18.383 10:16:55 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.383 10:16:55 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:18.383 10:16:55 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.383 10:16:55 nvmf_rdma.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:18.383 10:16:55 nvmf_rdma.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:09:18.383 10:16:55 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.383 10:16:55 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:18.383 10:16:55 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.383 10:16:55 nvmf_rdma.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:09:18.383 10:16:55 nvmf_rdma.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:09:18.383 10:16:55 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:18.383 10:16:55 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:18.383 10:16:55 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 
--hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:18.383 10:16:55 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:18.383 10:16:55 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:18.642 10:16:55 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:09:18.642 10:16:55 nvmf_rdma.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:09:18.642 10:16:55 nvmf_rdma.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery 00:09:18.642 10:16:55 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.642 10:16:55 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:18.642 10:16:55 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.642 10:16:55 nvmf_rdma.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:18.642 10:16:55 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.642 10:16:55 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:18.642 10:16:55 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.642 10:16:55 nvmf_rdma.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:09:18.642 10:16:55 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:18.642 10:16:55 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:18.642 10:16:55 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:18.642 10:16:55 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.642 10:16:55 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:18.642 10:16:55 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:18.642 10:16:55 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.642 10:16:55 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:09:18.642 10:16:55 nvmf_rdma.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:18.642 10:16:55 nvmf_rdma.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:09:18.642 10:16:55 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:18.642 10:16:55 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:18.642 10:16:55 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:18.642 10:16:55 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:18.642 10:16:55 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:18.642 10:16:55 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:09:18.642 10:16:55 nvmf_rdma.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:18.642 10:16:55 nvmf_rdma.nvmf_referrals -- 
target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:09:18.642 10:16:55 nvmf_rdma.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:09:18.642 10:16:55 nvmf_rdma.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:18.642 10:16:55 nvmf_rdma.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:18.642 10:16:55 nvmf_rdma.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:18.903 10:16:55 nvmf_rdma.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:18.903 10:16:55 nvmf_rdma.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:09:18.903 10:16:55 nvmf_rdma.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:09:18.903 10:16:55 nvmf_rdma.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:18.903 10:16:55 nvmf_rdma.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:18.903 10:16:55 nvmf_rdma.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:18.903 10:16:55 nvmf_rdma.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:18.903 10:16:55 nvmf_rdma.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:18.903 10:16:55 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.903 10:16:55 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:18.903 10:16:55 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.903 10:16:55 nvmf_rdma.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:09:18.903 10:16:55 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:18.903 10:16:55 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:18.903 10:16:55 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:18.903 10:16:55 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:18.903 10:16:55 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:18.903 10:16:55 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:18.903 10:16:55 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:18.903 10:16:56 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:09:18.903 10:16:56 nvmf_rdma.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:18.903 10:16:56 nvmf_rdma.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:09:18.903 10:16:56 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:18.903 10:16:56 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:18.903 
10:16:56 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:18.903 10:16:56 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:18.903 10:16:56 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:18.903 10:16:56 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:09:19.163 10:16:56 nvmf_rdma.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:19.163 10:16:56 nvmf_rdma.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:09:19.163 10:16:56 nvmf_rdma.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:09:19.163 10:16:56 nvmf_rdma.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:19.163 10:16:56 nvmf_rdma.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:19.163 10:16:56 nvmf_rdma.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:19.163 10:16:56 nvmf_rdma.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:09:19.163 10:16:56 nvmf_rdma.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:09:19.163 10:16:56 nvmf_rdma.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:09:19.163 10:16:56 nvmf_rdma.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:19.163 10:16:56 nvmf_rdma.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:19.163 10:16:56 nvmf_rdma.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:19.163 10:16:56 nvmf_rdma.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:19.163 10:16:56 nvmf_rdma.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:09:19.163 10:16:56 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:19.163 10:16:56 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:19.163 10:16:56 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:19.163 10:16:56 nvmf_rdma.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:19.163 10:16:56 nvmf_rdma.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:09:19.163 10:16:56 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:19.163 10:16:56 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:19.163 10:16:56 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:19.163 10:16:56 nvmf_rdma.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:09:19.163 10:16:56 nvmf_rdma.nvmf_referrals -- 
target/referrals.sh@83 -- # get_referral_ips nvme 00:09:19.163 10:16:56 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:19.163 10:16:56 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:19.163 10:16:56 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:19.164 10:16:56 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:19.164 10:16:56 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:19.425 10:16:56 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:09:19.425 10:16:56 nvmf_rdma.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:09:19.425 10:16:56 nvmf_rdma.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:09:19.425 10:16:56 nvmf_rdma.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:09:19.425 10:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:19.425 10:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:09:19.425 10:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:09:19.425 10:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:09:19.425 10:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:09:19.425 10:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:19.425 10:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:09:19.425 rmmod nvme_rdma 00:09:19.425 rmmod nvme_fabrics 00:09:19.425 10:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:19.425 10:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:09:19.425 10:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:09:19.425 10:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 2774132 ']' 00:09:19.425 10:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 2774132 00:09:19.425 10:16:56 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 2774132 ']' 00:09:19.425 10:16:56 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 2774132 00:09:19.425 10:16:56 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:09:19.425 10:16:56 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:19.425 10:16:56 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2774132 00:09:19.425 10:16:56 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:19.425 10:16:56 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:19.425 10:16:56 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2774132' 00:09:19.425 killing process with pid 2774132 00:09:19.425 10:16:56 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 2774132 00:09:19.425 10:16:56 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 2774132 00:09:19.686 10:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:19.686 10:16:56 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 
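Stripped of the xtrace noise, the referral test above exercises a small RPC surface against the discovery service. A condensed replay of the calls visible in the trace, with the addresses and NQNs exactly as used in this run (jq filters and the literal host identifiers shortened; NVME_HOSTNQN/NVME_HOSTID are the harness variables):

    # listener for the discovery subsystem on the first RDMA target IP
    rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery
    # add three referrals, then confirm both the target and the host see them
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        rpc_cmd nvmf_discovery_add_referral -t rdma -a "$ip" -s 4430
    done
    rpc_cmd nvmf_discovery_get_referrals | jq length      # 3 in this run
    nvme discover -t rdma -a 192.168.100.8 -s 8009 -o json \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
    # remove them again and re-check; referrals may also carry a subsystem NQN (-n),
    # which the trace adds and removes with matching values
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        rpc_cmd nvmf_discovery_remove_referral -t rdma -a "$ip" -s 4430
    done
    rpc_cmd nvmf_discovery_add_referral    -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
    rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1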
00:09:19.686 00:09:19.686 real 0m10.579s 00:09:19.686 user 0m12.694s 00:09:19.686 sys 0m6.354s 00:09:19.686 10:16:56 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:19.686 10:16:56 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:19.686 ************************************ 00:09:19.686 END TEST nvmf_referrals 00:09:19.686 ************************************ 00:09:19.686 10:16:56 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:09:19.686 10:16:56 nvmf_rdma -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:09:19.686 10:16:56 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:19.686 10:16:56 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:19.686 10:16:56 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:09:19.686 ************************************ 00:09:19.686 START TEST nvmf_connect_disconnect 00:09:19.686 ************************************ 00:09:19.686 10:16:56 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:09:19.946 * Looking for test storage... 00:09:19.946 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:19.946 10:16:56 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:19.946 10:16:56 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:09:19.946 10:16:56 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:19.946 10:16:56 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:19.946 10:16:56 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:19.946 10:16:56 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:19.946 10:16:56 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:19.946 10:16:56 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:19.946 10:16:56 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:19.946 10:16:56 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:19.946 10:16:56 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:19.946 10:16:56 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:19.946 10:16:56 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:19.946 10:16:56 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:19.946 10:16:56 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:19.946 10:16:56 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:19.946 10:16:56 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:19.946 10:16:56 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:19.946 10:16:56 
nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:19.946 10:16:56 nvmf_rdma.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:19.946 10:16:56 nvmf_rdma.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:19.946 10:16:56 nvmf_rdma.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:19.947 10:16:56 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.947 10:16:56 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.947 10:16:56 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.947 10:16:56 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:09:19.947 10:16:56 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.947 10:16:56 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:09:19.947 10:16:56 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:19.947 10:16:56 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:19.947 10:16:56 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:19.947 10:16:56 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:09:19.947 10:16:56 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:19.947 10:16:56 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:19.947 10:16:56 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:19.947 10:16:56 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:19.947 10:16:56 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:19.947 10:16:56 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:19.947 10:16:56 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:09:19.947 10:16:56 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:09:19.947 10:16:56 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:19.947 10:16:56 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:19.947 10:16:56 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:19.947 10:16:56 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:19.947 10:16:56 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.947 10:16:56 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:19.947 10:16:56 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:19.947 10:16:56 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:19.947 10:16:56 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:19.947 10:16:56 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:09:19.947 10:16:56 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:28.095 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:28.095 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:09:28.095 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:28.095 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:28.095 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:28.095 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:28.095 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:28.095 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:09:28.095 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:28.095 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:09:28.095 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:09:28.095 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:09:28.095 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:09:28.095 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:09:28.095 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:09:28.095 10:17:04 
nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:28.095 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:28.095 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:28.095 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:28.095 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:28.095 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:28.095 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:28.095 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:28.095 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:28.095 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:28.095 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:28.095 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:28.095 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:28.095 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:28.095 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:28.095 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:28.095 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:28.095 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:28.095 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:28.095 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:09:28.095 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:09:28.095 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:28.095 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:28.095 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:28.095 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:28.095 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:28.095 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:28.095 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:28.095 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:09:28.095 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:09:28.095 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:28.095 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound 
]] 00:09:28.095 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:28.095 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:28.095 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:28.095 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:28.095 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:28.095 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:28.095 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:28.095 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:28.095 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:28.095 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:28.095 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:28.095 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:09:28.095 Found net devices under 0000:98:00.0: mlx_0_0 00:09:28.095 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:28.095 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:28.095 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:28.095 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:28.095 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:28.095 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:28.095 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:09:28.095 Found net devices under 0000:98:00.1: mlx_0_1 00:09:28.095 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:28.095 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:28.095 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:09:28.095 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:28.095 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:09:28.095 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:09:28.095 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # rdma_device_init 00:09:28.095 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # uname 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@64 
-- # modprobe ib_umad 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # allocate_nic_ips 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@81 -- # 
ip addr show mlx_0_0 00:09:28.096 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:28.096 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:09:28.096 altname enp152s0f0np0 00:09:28.096 altname ens817f0np0 00:09:28.096 inet 192.168.100.8/24 scope global mlx_0_0 00:09:28.096 valid_lft forever preferred_lft forever 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:28.096 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:28.096 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:09:28.096 altname enp152s0f1np1 00:09:28.096 altname ens817f1np1 00:09:28.096 inet 192.168.100.9/24 scope global mlx_0_1 00:09:28.096 valid_lft forever preferred_lft forever 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@86 -- # get_rdma_if_list 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:09:28.096 192.168.100.9' 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:09:28.096 192.168.100.9' 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # head -n 1 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:09:28.096 192.168.100.9' 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # tail -n +2 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # head -n 1 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:28.096 10:17:04 
nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=2778911 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 2778911 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 2778911 ']' 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:28.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:28.096 10:17:04 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:28.096 [2024-07-15 10:17:04.821287] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:28.096 [2024-07-15 10:17:04.821349] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:28.096 EAL: No free 2048 kB hugepages reported on node 1 00:09:28.096 [2024-07-15 10:17:04.891291] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:28.096 [2024-07-15 10:17:04.966004] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:28.097 [2024-07-15 10:17:04.966044] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:28.097 [2024-07-15 10:17:04.966053] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:28.097 [2024-07-15 10:17:04.966064] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:28.097 [2024-07-15 10:17:04.966069] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
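The entries ending here show nvmfappstart launching build/bin/nvmf_tgt (-i 0 -e 0xFFFF -m 0xF) and waitforlisten blocking until the app answers on /var/tmp/spdk.sock before any configuration RPCs are issued. A minimal standalone sketch of that bring-up step in bash; the SPDK_DIR value and the rpc_get_methods polling loop are assumptions for illustration, not the harness's own waitforlisten implementation:

# Start the NVMe-oF target with the same flags the trace shows
SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk          # assumed checkout location
"$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Poll the JSON-RPC socket until the target is ready (simplified stand-in for waitforlisten)
until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
    sleep 0.5
done
echo "nvmf_tgt (pid $nvmfpid) is listening on /var/tmp/spdk.sock"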
00:09:28.097 [2024-07-15 10:17:04.966205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:28.097 [2024-07-15 10:17:04.966321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:28.097 [2024-07-15 10:17:04.966428] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.097 [2024-07-15 10:17:04.966429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:28.713 10:17:05 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:28.713 10:17:05 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:09:28.713 10:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:28.713 10:17:05 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:28.713 10:17:05 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:28.713 10:17:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:28.713 10:17:05 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:09:28.713 10:17:05 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.713 10:17:05 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:28.713 [2024-07-15 10:17:05.652939] rdma.c:2731:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:09:28.713 [2024-07-15 10:17:05.684120] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1bf9200/0x1bfd6f0) succeed. 00:09:28.713 [2024-07-15 10:17:05.698564] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1bfa840/0x1c3ed80) succeed. 
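With the RDMA transport created and both mlx5 IB devices up, the entries that follow build the test subsystem entirely over RPC and then run the five connect/disconnect iterations (num_iterations=5). A condensed sketch of that same sequence, calling scripts/rpc.py and the nvme CLI directly; the rpc.py path is assumed, and the plain nvme connect/disconnect loop here is an illustration of what the trace reports rather than the script's own wrapper (the harness uses rpc_cmd and NVME_CONNECT='nvme connect -i 15'):

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py    # assumed path

# Same RPC sequence the trace shows: transport, malloc bdev, subsystem, namespace, listener
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
$rpc bdev_malloc_create 64 512                                      # 64 MB bdev, 512-byte blocks -> Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

# Five connect/disconnect passes over RDMA, matching the five "disconnected 1 controller(s)" lines below
for i in $(seq 1 5); do
    nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 -i 15
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
done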
00:09:28.713 10:17:05 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.714 10:17:05 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:09:28.714 10:17:05 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.714 10:17:05 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:28.714 10:17:05 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.714 10:17:05 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:09:28.714 10:17:05 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:28.714 10:17:05 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.714 10:17:05 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:28.714 10:17:05 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.714 10:17:05 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:28.714 10:17:05 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.714 10:17:05 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:28.714 10:17:05 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.714 10:17:05 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:28.714 10:17:05 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.714 10:17:05 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:28.714 [2024-07-15 10:17:05.856517] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:28.714 10:17:05 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.714 10:17:05 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:09:28.714 10:17:05 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:09:28.714 10:17:05 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:09:34.000 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:38.199 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:43.488 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:47.693 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:52.977 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:52.977 10:17:29 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:09:52.978 10:17:29 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:09:52.978 10:17:29 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:52.978 10:17:29 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:09:52.978 10:17:29 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:09:52.978 10:17:29 nvmf_rdma.nvmf_connect_disconnect 
-- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:09:52.978 10:17:29 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:09:52.978 10:17:29 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:52.978 10:17:29 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:09:52.978 rmmod nvme_rdma 00:09:52.978 rmmod nvme_fabrics 00:09:52.978 10:17:29 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:52.978 10:17:29 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:09:52.978 10:17:29 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:09:52.978 10:17:29 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 2778911 ']' 00:09:52.978 10:17:29 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 2778911 00:09:52.978 10:17:29 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 2778911 ']' 00:09:52.978 10:17:29 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 2778911 00:09:52.978 10:17:29 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:09:52.978 10:17:29 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:52.978 10:17:29 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2778911 00:09:52.978 10:17:29 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:52.978 10:17:29 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:52.978 10:17:29 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2778911' 00:09:52.978 killing process with pid 2778911 00:09:52.978 10:17:29 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 2778911 00:09:52.978 10:17:29 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 2778911 00:09:52.978 10:17:29 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:52.978 10:17:29 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:09:52.978 00:09:52.978 real 0m32.820s 00:09:52.978 user 1m40.410s 00:09:52.978 sys 0m6.712s 00:09:52.978 10:17:29 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:52.978 10:17:29 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:52.978 ************************************ 00:09:52.978 END TEST nvmf_connect_disconnect 00:09:52.978 ************************************ 00:09:52.978 10:17:29 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:09:52.978 10:17:29 nvmf_rdma -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:09:52.978 10:17:29 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:52.978 10:17:29 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:52.978 10:17:29 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:09:52.978 ************************************ 00:09:52.978 START TEST nvmf_multitarget 00:09:52.978 ************************************ 00:09:52.978 10:17:29 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:09:52.978 * Looking for test storage... 00:09:52.978 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:52.978 10:17:29 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:52.978 10:17:29 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:09:52.978 10:17:29 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:52.978 10:17:29 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:52.978 10:17:29 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:52.978 10:17:29 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:52.978 10:17:29 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:52.978 10:17:29 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:52.978 10:17:29 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:52.978 10:17:29 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:52.978 10:17:29 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:52.978 10:17:29 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:52.978 10:17:29 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:52.978 10:17:29 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:52.978 10:17:29 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:52.978 10:17:29 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:52.978 10:17:29 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:52.978 10:17:29 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:52.978 10:17:29 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:52.978 10:17:29 nvmf_rdma.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:52.978 10:17:29 nvmf_rdma.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:52.978 10:17:29 nvmf_rdma.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:52.978 10:17:29 nvmf_rdma.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.978 10:17:29 nvmf_rdma.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.978 10:17:29 nvmf_rdma.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.978 10:17:29 nvmf_rdma.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:09:52.978 10:17:29 nvmf_rdma.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.978 10:17:29 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:09:52.978 10:17:29 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:52.978 10:17:29 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:52.978 10:17:29 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:52.978 10:17:29 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:52.978 10:17:29 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:52.978 10:17:29 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:52.978 10:17:29 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:52.978 10:17:29 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:52.978 10:17:29 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:52.978 10:17:29 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:09:52.978 10:17:29 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:09:52.978 10:17:29 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:52.978 10:17:29 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:52.978 10:17:29 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:52.978 10:17:29 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:52.978 10:17:29 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:09:52.978 10:17:29 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:52.978 10:17:29 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:52.978 10:17:29 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:52.978 10:17:29 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:52.978 10:17:29 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:09:52.978 10:17:29 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:01.115 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:01.115 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:10:01.115 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:01.115 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:01.115 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:01.115 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:01.115 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:01.115 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:10:01.115 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:01.115 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:10:01.115 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:10:01.115 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:10:01.115 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:10:01.115 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:10:01.115 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:10:01.115 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:01.115 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:01.115 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:01.115 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:01.115 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:01.115 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:01.115 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:01.115 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:01.115 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:10:01.116 10:17:37 
nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:10:01.116 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:10:01.116 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:10:01.116 Found net devices under 0000:98:00.0: mlx_0_0 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- 
nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:10:01.116 Found net devices under 0000:98:00.1: mlx_0_1 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@420 -- # rdma_device_init 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@58 -- # uname 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@62 -- # modprobe ib_cm 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@63 -- # modprobe ib_core 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@64 -- # modprobe ib_umad 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@66 -- # modprobe iw_cm 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@502 -- # allocate_nic_ips 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@73 -- # get_rdma_if_list 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for 
rxe_net_dev in "${rxe_net_devs[@]}" 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:10:01.116 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:01.116 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:10:01.116 altname enp152s0f0np0 00:10:01.116 altname ens817f0np0 00:10:01.116 inet 192.168.100.8/24 scope global mlx_0_0 00:10:01.116 valid_lft forever preferred_lft forever 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:10:01.116 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:01.116 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:10:01.116 altname enp152s0f1np1 00:10:01.116 altname ens817f1np1 00:10:01.116 inet 192.168.100.9/24 scope global mlx_0_1 00:10:01.116 valid_lft forever preferred_lft forever 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@86 -- # get_rdma_if_list 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- 
nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:01.116 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:01.117 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:01.117 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:01.117 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:01.117 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:10:01.117 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:01.117 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:01.117 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:01.117 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:01.117 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:10:01.117 192.168.100.9' 00:10:01.117 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:10:01.117 192.168.100.9' 00:10:01.117 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@457 -- # head -n 1 00:10:01.117 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:01.117 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:10:01.117 192.168.100.9' 00:10:01.117 10:17:37 nvmf_rdma.nvmf_multitarget -- 
nvmf/common.sh@458 -- # tail -n +2 00:10:01.117 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@458 -- # head -n 1 00:10:01.117 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:01.117 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:10:01.117 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:01.117 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:10:01.117 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:10:01.117 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:10:01.117 10:17:37 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:10:01.117 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:01.117 10:17:37 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:01.117 10:17:37 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:01.117 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=2788177 00:10:01.117 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 2788177 00:10:01.117 10:17:37 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:01.117 10:17:37 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 2788177 ']' 00:10:01.117 10:17:37 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:01.117 10:17:37 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:01.117 10:17:37 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:01.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:01.117 10:17:37 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:01.117 10:17:37 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:01.117 [2024-07-15 10:17:37.963912] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:10:01.117 [2024-07-15 10:17:37.963981] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:01.117 EAL: No free 2048 kB hugepages reported on node 1 00:10:01.117 [2024-07-15 10:17:38.038620] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:01.117 [2024-07-15 10:17:38.114911] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:01.117 [2024-07-15 10:17:38.114955] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:01.117 [2024-07-15 10:17:38.114963] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:01.117 [2024-07-15 10:17:38.114969] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:10:01.117 [2024-07-15 10:17:38.114975] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:01.117 [2024-07-15 10:17:38.115116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:01.117 [2024-07-15 10:17:38.115257] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:01.117 [2024-07-15 10:17:38.115351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.117 [2024-07-15 10:17:38.115351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:01.687 10:17:38 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:01.687 10:17:38 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:10:01.687 10:17:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:01.687 10:17:38 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:01.687 10:17:38 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:01.687 10:17:38 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:01.687 10:17:38 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:01.687 10:17:38 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:01.687 10:17:38 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:10:01.947 10:17:38 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:10:01.947 10:17:38 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:10:01.947 "nvmf_tgt_1" 00:10:01.947 10:17:38 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:10:01.947 "nvmf_tgt_2" 00:10:01.947 10:17:39 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:01.947 10:17:39 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:10:02.207 10:17:39 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:10:02.207 10:17:39 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:10:02.207 true 00:10:02.207 10:17:39 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:10:02.207 true 00:10:02.207 10:17:39 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:02.207 10:17:39 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:10:02.467 10:17:39 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:10:02.467 10:17:39 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:02.467 10:17:39 nvmf_rdma.nvmf_multitarget -- 
target/multitarget.sh@41 -- # nvmftestfini 00:10:02.467 10:17:39 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:02.467 10:17:39 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:10:02.468 10:17:39 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:10:02.468 10:17:39 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:10:02.468 10:17:39 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:10:02.468 10:17:39 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:02.468 10:17:39 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:10:02.468 rmmod nvme_rdma 00:10:02.468 rmmod nvme_fabrics 00:10:02.468 10:17:39 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:02.468 10:17:39 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:10:02.468 10:17:39 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:10:02.468 10:17:39 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 2788177 ']' 00:10:02.468 10:17:39 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 2788177 00:10:02.468 10:17:39 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 2788177 ']' 00:10:02.468 10:17:39 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 2788177 00:10:02.468 10:17:39 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:10:02.468 10:17:39 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:02.468 10:17:39 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2788177 00:10:02.468 10:17:39 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:02.468 10:17:39 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:02.468 10:17:39 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2788177' 00:10:02.468 killing process with pid 2788177 00:10:02.468 10:17:39 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 2788177 00:10:02.468 10:17:39 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 2788177 00:10:02.728 10:17:39 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:02.728 10:17:39 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:10:02.728 00:10:02.728 real 0m10.002s 00:10:02.728 user 0m9.586s 00:10:02.728 sys 0m6.353s 00:10:02.728 10:17:39 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:02.728 10:17:39 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:02.728 ************************************ 00:10:02.728 END TEST nvmf_multitarget 00:10:02.728 ************************************ 00:10:02.728 10:17:39 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:10:02.728 10:17:39 nvmf_rdma -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:10:02.728 10:17:39 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:02.728 10:17:39 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:02.728 10:17:39 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:10:02.728 ************************************ 00:10:02.728 START TEST nvmf_rpc 00:10:02.728 
************************************ 00:10:02.728 10:17:39 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:10:02.728 * Looking for test storage... 00:10:02.728 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:02.728 10:17:39 nvmf_rdma.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:02.728 10:17:39 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:10:02.728 10:17:39 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:02.728 10:17:39 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:02.728 10:17:39 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:02.728 10:17:39 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:02.728 10:17:39 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:02.728 10:17:39 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:02.728 10:17:39 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:02.728 10:17:39 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:02.728 10:17:39 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:02.728 10:17:39 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:02.728 10:17:39 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:02.728 10:17:39 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:02.728 10:17:39 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:02.728 10:17:39 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:02.728 10:17:39 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:02.728 10:17:39 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:02.728 10:17:39 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:02.990 10:17:39 nvmf_rdma.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:02.990 10:17:39 nvmf_rdma.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:02.990 10:17:39 nvmf_rdma.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:02.990 10:17:39 nvmf_rdma.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.990 10:17:39 nvmf_rdma.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.990 10:17:39 nvmf_rdma.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.990 10:17:39 nvmf_rdma.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:10:02.990 10:17:39 nvmf_rdma.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.990 10:17:39 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:10:02.990 10:17:39 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:02.990 10:17:39 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:02.990 10:17:39 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:02.990 10:17:39 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:02.990 10:17:39 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:02.990 10:17:39 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:02.990 10:17:39 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:02.990 10:17:39 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:02.990 10:17:39 nvmf_rdma.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:10:02.990 10:17:39 nvmf_rdma.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:10:02.990 10:17:39 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:10:02.990 10:17:39 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:02.990 10:17:39 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:02.990 10:17:39 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:02.990 10:17:39 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:02.990 10:17:39 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:02.990 10:17:39 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:02.990 10:17:39 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:02.990 10:17:39 
nvmf_rdma.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:02.990 10:17:39 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:02.990 10:17:39 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:10:02.990 10:17:39 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 
00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:10:11.119 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:10:11.119 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:10:11.119 Found net devices under 0000:98:00.0: mlx_0_0 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:10:11.119 Found net devices under 0000:98:00.1: mlx_0_1 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:11.119 10:17:47 
nvmf_rdma.nvmf_rpc -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@420 -- # rdma_device_init 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@58 -- # uname 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@62 -- # modprobe ib_cm 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@63 -- # modprobe ib_core 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@64 -- # modprobe ib_umad 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@66 -- # modprobe iw_cm 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@502 -- # allocate_nic_ips 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@73 -- # get_rdma_if_list 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:11.119 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:11.120 
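The ip/awk/cut pipeline traced just above is how the harness resolves the IPv4 address of each RDMA netdev. Reconstructed from the trace (the exact get_ip_address body in nvmf/common.sh may differ slightly):

#!/usr/bin/env bash
# Reconstructed helper: print the IPv4 address assigned to an interface.
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

get_ip_address mlx_0_0   # -> 192.168.100.8 on this rig
get_ip_address mlx_0_1   # -> 192.168.100.9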
10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:10:11.120 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:11.120 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:10:11.120 altname enp152s0f0np0 00:10:11.120 altname ens817f0np0 00:10:11.120 inet 192.168.100.8/24 scope global mlx_0_0 00:10:11.120 valid_lft forever preferred_lft forever 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:10:11.120 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:11.120 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:10:11.120 altname enp152s0f1np1 00:10:11.120 altname ens817f1np1 00:10:11.120 inet 192.168.100.9/24 scope global mlx_0_1 00:10:11.120 valid_lft forever preferred_lft forever 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@86 -- # get_rdma_if_list 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- 
nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:10:11.120 192.168.100.9' 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:10:11.120 192.168.100.9' 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@457 -- # head -n 1 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:10:11.120 192.168.100.9' 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@458 -- # tail -n +2 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@458 -- # head -n 1 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=2792781 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 2792781 00:10:11.120 10:17:47 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:11.120 10:17:48 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 2792781 ']' 00:10:11.120 10:17:48 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:10:11.120 10:17:48 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:11.120 10:17:48 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:11.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:11.120 10:17:48 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:11.120 10:17:48 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:11.120 [2024-07-15 10:17:48.057948] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:10:11.120 [2024-07-15 10:17:48.058009] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:11.120 EAL: No free 2048 kB hugepages reported on node 1 00:10:11.120 [2024-07-15 10:17:48.129721] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:11.120 [2024-07-15 10:17:48.206250] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:11.120 [2024-07-15 10:17:48.206291] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:11.120 [2024-07-15 10:17:48.206299] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:11.120 [2024-07-15 10:17:48.206305] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:11.120 [2024-07-15 10:17:48.206311] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
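The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message comes from waitforlisten, which blocks until the freshly started nvmf_tgt is ready to take RPCs. A hedged sketch of what that wait amounts to (the real helper in autotest_common.sh is more elaborate and also confirms readiness over RPC):

#!/usr/bin/env bash
# Sketch: poll until the target process is alive and its RPC socket exists.
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # target died, give up
        [[ -S $rpc_addr ]] && return 0           # socket is up
        sleep 0.5
    done
    return 1
}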
00:10:11.120 [2024-07-15 10:17:48.206381] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:11.120 [2024-07-15 10:17:48.206517] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:11.120 [2024-07-15 10:17:48.206672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.120 [2024-07-15 10:17:48.206673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:11.690 10:17:48 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:11.690 10:17:48 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:10:11.690 10:17:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:11.690 10:17:48 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:11.690 10:17:48 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:11.690 10:17:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:11.690 10:17:48 nvmf_rdma.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:10:11.690 10:17:48 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.690 10:17:48 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:11.950 10:17:48 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.950 10:17:48 nvmf_rdma.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:10:11.950 "tick_rate": 2400000000, 00:10:11.950 "poll_groups": [ 00:10:11.950 { 00:10:11.950 "name": "nvmf_tgt_poll_group_000", 00:10:11.950 "admin_qpairs": 0, 00:10:11.950 "io_qpairs": 0, 00:10:11.950 "current_admin_qpairs": 0, 00:10:11.950 "current_io_qpairs": 0, 00:10:11.950 "pending_bdev_io": 0, 00:10:11.950 "completed_nvme_io": 0, 00:10:11.950 "transports": [] 00:10:11.950 }, 00:10:11.950 { 00:10:11.950 "name": "nvmf_tgt_poll_group_001", 00:10:11.950 "admin_qpairs": 0, 00:10:11.950 "io_qpairs": 0, 00:10:11.950 "current_admin_qpairs": 0, 00:10:11.950 "current_io_qpairs": 0, 00:10:11.950 "pending_bdev_io": 0, 00:10:11.950 "completed_nvme_io": 0, 00:10:11.950 "transports": [] 00:10:11.950 }, 00:10:11.950 { 00:10:11.950 "name": "nvmf_tgt_poll_group_002", 00:10:11.950 "admin_qpairs": 0, 00:10:11.950 "io_qpairs": 0, 00:10:11.950 "current_admin_qpairs": 0, 00:10:11.950 "current_io_qpairs": 0, 00:10:11.950 "pending_bdev_io": 0, 00:10:11.950 "completed_nvme_io": 0, 00:10:11.950 "transports": [] 00:10:11.950 }, 00:10:11.950 { 00:10:11.950 "name": "nvmf_tgt_poll_group_003", 00:10:11.950 "admin_qpairs": 0, 00:10:11.950 "io_qpairs": 0, 00:10:11.950 "current_admin_qpairs": 0, 00:10:11.950 "current_io_qpairs": 0, 00:10:11.950 "pending_bdev_io": 0, 00:10:11.950 "completed_nvme_io": 0, 00:10:11.950 "transports": [] 00:10:11.950 } 00:10:11.950 ] 00:10:11.950 }' 00:10:11.950 10:17:48 nvmf_rdma.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:10:11.950 10:17:48 nvmf_rdma.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:10:11.950 10:17:48 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:10:11.950 10:17:48 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:10:11.951 10:17:48 nvmf_rdma.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:10:11.951 10:17:48 nvmf_rdma.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:10:11.951 10:17:48 nvmf_rdma.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:10:11.951 10:17:48 nvmf_rdma.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport 
-t rdma --num-shared-buffers 1024 -u 8192 00:10:11.951 10:17:48 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.951 10:17:48 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:11.951 [2024-07-15 10:17:49.024064] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x154c210/0x1550700) succeed. 00:10:11.951 [2024-07-15 10:17:49.038298] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x154d850/0x1591d90) succeed. 00:10:12.211 10:17:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:12.211 10:17:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:10:12.211 10:17:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:12.211 10:17:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:12.211 10:17:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:12.211 10:17:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:10:12.211 "tick_rate": 2400000000, 00:10:12.211 "poll_groups": [ 00:10:12.211 { 00:10:12.211 "name": "nvmf_tgt_poll_group_000", 00:10:12.211 "admin_qpairs": 0, 00:10:12.211 "io_qpairs": 0, 00:10:12.211 "current_admin_qpairs": 0, 00:10:12.211 "current_io_qpairs": 0, 00:10:12.211 "pending_bdev_io": 0, 00:10:12.211 "completed_nvme_io": 0, 00:10:12.211 "transports": [ 00:10:12.211 { 00:10:12.211 "trtype": "RDMA", 00:10:12.211 "pending_data_buffer": 0, 00:10:12.211 "devices": [ 00:10:12.211 { 00:10:12.211 "name": "mlx5_0", 00:10:12.211 "polls": 15906, 00:10:12.211 "idle_polls": 15906, 00:10:12.211 "completions": 0, 00:10:12.211 "requests": 0, 00:10:12.211 "request_latency": 0, 00:10:12.211 "pending_free_request": 0, 00:10:12.211 "pending_rdma_read": 0, 00:10:12.211 "pending_rdma_write": 0, 00:10:12.211 "pending_rdma_send": 0, 00:10:12.211 "total_send_wrs": 0, 00:10:12.211 "send_doorbell_updates": 0, 00:10:12.211 "total_recv_wrs": 4096, 00:10:12.211 "recv_doorbell_updates": 1 00:10:12.211 }, 00:10:12.211 { 00:10:12.211 "name": "mlx5_1", 00:10:12.211 "polls": 15906, 00:10:12.211 "idle_polls": 15906, 00:10:12.211 "completions": 0, 00:10:12.211 "requests": 0, 00:10:12.211 "request_latency": 0, 00:10:12.211 "pending_free_request": 0, 00:10:12.211 "pending_rdma_read": 0, 00:10:12.211 "pending_rdma_write": 0, 00:10:12.211 "pending_rdma_send": 0, 00:10:12.211 "total_send_wrs": 0, 00:10:12.211 "send_doorbell_updates": 0, 00:10:12.211 "total_recv_wrs": 4096, 00:10:12.211 "recv_doorbell_updates": 1 00:10:12.211 } 00:10:12.211 ] 00:10:12.211 } 00:10:12.211 ] 00:10:12.211 }, 00:10:12.211 { 00:10:12.211 "name": "nvmf_tgt_poll_group_001", 00:10:12.211 "admin_qpairs": 0, 00:10:12.211 "io_qpairs": 0, 00:10:12.211 "current_admin_qpairs": 0, 00:10:12.211 "current_io_qpairs": 0, 00:10:12.211 "pending_bdev_io": 0, 00:10:12.211 "completed_nvme_io": 0, 00:10:12.211 "transports": [ 00:10:12.211 { 00:10:12.211 "trtype": "RDMA", 00:10:12.211 "pending_data_buffer": 0, 00:10:12.211 "devices": [ 00:10:12.211 { 00:10:12.211 "name": "mlx5_0", 00:10:12.211 "polls": 15943, 00:10:12.211 "idle_polls": 15943, 00:10:12.211 "completions": 0, 00:10:12.211 "requests": 0, 00:10:12.211 "request_latency": 0, 00:10:12.211 "pending_free_request": 0, 00:10:12.211 "pending_rdma_read": 0, 00:10:12.211 "pending_rdma_write": 0, 00:10:12.211 "pending_rdma_send": 0, 00:10:12.211 "total_send_wrs": 0, 00:10:12.211 "send_doorbell_updates": 0, 00:10:12.211 "total_recv_wrs": 4096, 00:10:12.211 "recv_doorbell_updates": 1 00:10:12.211 }, 
00:10:12.211 { 00:10:12.211 "name": "mlx5_1", 00:10:12.211 "polls": 15943, 00:10:12.211 "idle_polls": 15943, 00:10:12.211 "completions": 0, 00:10:12.211 "requests": 0, 00:10:12.211 "request_latency": 0, 00:10:12.211 "pending_free_request": 0, 00:10:12.211 "pending_rdma_read": 0, 00:10:12.211 "pending_rdma_write": 0, 00:10:12.211 "pending_rdma_send": 0, 00:10:12.211 "total_send_wrs": 0, 00:10:12.211 "send_doorbell_updates": 0, 00:10:12.211 "total_recv_wrs": 4096, 00:10:12.211 "recv_doorbell_updates": 1 00:10:12.211 } 00:10:12.211 ] 00:10:12.211 } 00:10:12.211 ] 00:10:12.211 }, 00:10:12.211 { 00:10:12.211 "name": "nvmf_tgt_poll_group_002", 00:10:12.211 "admin_qpairs": 0, 00:10:12.211 "io_qpairs": 0, 00:10:12.211 "current_admin_qpairs": 0, 00:10:12.211 "current_io_qpairs": 0, 00:10:12.211 "pending_bdev_io": 0, 00:10:12.211 "completed_nvme_io": 0, 00:10:12.211 "transports": [ 00:10:12.211 { 00:10:12.211 "trtype": "RDMA", 00:10:12.211 "pending_data_buffer": 0, 00:10:12.211 "devices": [ 00:10:12.211 { 00:10:12.211 "name": "mlx5_0", 00:10:12.211 "polls": 5734, 00:10:12.211 "idle_polls": 5734, 00:10:12.211 "completions": 0, 00:10:12.211 "requests": 0, 00:10:12.211 "request_latency": 0, 00:10:12.211 "pending_free_request": 0, 00:10:12.211 "pending_rdma_read": 0, 00:10:12.211 "pending_rdma_write": 0, 00:10:12.212 "pending_rdma_send": 0, 00:10:12.212 "total_send_wrs": 0, 00:10:12.212 "send_doorbell_updates": 0, 00:10:12.212 "total_recv_wrs": 4096, 00:10:12.212 "recv_doorbell_updates": 1 00:10:12.212 }, 00:10:12.212 { 00:10:12.212 "name": "mlx5_1", 00:10:12.212 "polls": 5734, 00:10:12.212 "idle_polls": 5734, 00:10:12.212 "completions": 0, 00:10:12.212 "requests": 0, 00:10:12.212 "request_latency": 0, 00:10:12.212 "pending_free_request": 0, 00:10:12.212 "pending_rdma_read": 0, 00:10:12.212 "pending_rdma_write": 0, 00:10:12.212 "pending_rdma_send": 0, 00:10:12.212 "total_send_wrs": 0, 00:10:12.212 "send_doorbell_updates": 0, 00:10:12.212 "total_recv_wrs": 4096, 00:10:12.212 "recv_doorbell_updates": 1 00:10:12.212 } 00:10:12.212 ] 00:10:12.212 } 00:10:12.212 ] 00:10:12.212 }, 00:10:12.212 { 00:10:12.212 "name": "nvmf_tgt_poll_group_003", 00:10:12.212 "admin_qpairs": 0, 00:10:12.212 "io_qpairs": 0, 00:10:12.212 "current_admin_qpairs": 0, 00:10:12.212 "current_io_qpairs": 0, 00:10:12.212 "pending_bdev_io": 0, 00:10:12.212 "completed_nvme_io": 0, 00:10:12.212 "transports": [ 00:10:12.212 { 00:10:12.212 "trtype": "RDMA", 00:10:12.212 "pending_data_buffer": 0, 00:10:12.212 "devices": [ 00:10:12.212 { 00:10:12.212 "name": "mlx5_0", 00:10:12.212 "polls": 879, 00:10:12.212 "idle_polls": 879, 00:10:12.212 "completions": 0, 00:10:12.212 "requests": 0, 00:10:12.212 "request_latency": 0, 00:10:12.212 "pending_free_request": 0, 00:10:12.212 "pending_rdma_read": 0, 00:10:12.212 "pending_rdma_write": 0, 00:10:12.212 "pending_rdma_send": 0, 00:10:12.212 "total_send_wrs": 0, 00:10:12.212 "send_doorbell_updates": 0, 00:10:12.212 "total_recv_wrs": 4096, 00:10:12.212 "recv_doorbell_updates": 1 00:10:12.212 }, 00:10:12.212 { 00:10:12.212 "name": "mlx5_1", 00:10:12.212 "polls": 879, 00:10:12.212 "idle_polls": 879, 00:10:12.212 "completions": 0, 00:10:12.212 "requests": 0, 00:10:12.212 "request_latency": 0, 00:10:12.212 "pending_free_request": 0, 00:10:12.212 "pending_rdma_read": 0, 00:10:12.212 "pending_rdma_write": 0, 00:10:12.212 "pending_rdma_send": 0, 00:10:12.212 "total_send_wrs": 0, 00:10:12.212 "send_doorbell_updates": 0, 00:10:12.212 "total_recv_wrs": 4096, 00:10:12.212 "recv_doorbell_updates": 1 00:10:12.212 } 
00:10:12.212 ] 00:10:12.212 } 00:10:12.212 ] 00:10:12.212 } 00:10:12.212 ] 00:10:12.212 }' 00:10:12.212 10:17:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:10:12.212 10:17:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:10:12.212 10:17:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:10:12.212 10:17:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:12.212 10:17:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:10:12.212 10:17:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:10:12.212 10:17:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:10:12.212 10:17:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:10:12.212 10:17:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:12.212 10:17:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:10:12.212 10:17:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == rdma ']' 00:10:12.212 10:17:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@40 -- # jcount '.poll_groups[0].transports[].trtype' 00:10:12.212 10:17:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[].trtype' 00:10:12.212 10:17:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[].trtype' 00:10:12.212 10:17:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:10:12.212 10:17:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@40 -- # (( 1 == 1 )) 00:10:12.212 10:17:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@41 -- # jq -r '.poll_groups[0].transports[0].trtype' 00:10:12.212 10:17:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@41 -- # transport_type=RDMA 00:10:12.212 10:17:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@42 -- # [[ rdma == \r\d\m\a ]] 00:10:12.212 10:17:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@43 -- # jcount '.poll_groups[0].transports[0].devices[].name' 00:10:12.212 10:17:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[0].devices[].name' 00:10:12.212 10:17:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[0].devices[].name' 00:10:12.212 10:17:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:10:12.473 10:17:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@43 -- # (( 2 > 0 )) 00:10:12.473 10:17:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:10:12.473 10:17:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:10:12.473 10:17:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:10:12.473 10:17:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:12.473 10:17:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:12.473 Malloc1 00:10:12.473 10:17:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:12.473 10:17:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:12.473 10:17:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:12.473 10:17:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:12.473 10:17:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:12.473 10:17:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:12.473 
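The jcount and jsum helpers exercised above are thin jq pipelines over the JSON that "rpc_cmd nvmf_get_stats" returned (held in $stats). Reconstructed from the trace; the exact quoting in target/rpc.sh may differ:

#!/usr/bin/env bash
# Reconstruction of the two stat helpers seen in the xtrace output.
jcount() {   # how many values a jq filter yields, e.g. number of poll groups
    local filter=$1
    jq "$filter" <<< "$stats" | wc -l
}

jsum() {     # numeric sum of the values a jq filter yields, e.g. total io_qpairs
    local filter=$1
    jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
}

# Usage mirrored from the trace:
# (( $(jcount '.poll_groups[].name') == 4 ))
# (( $(jsum '.poll_groups[].io_qpairs') == 0 ))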
10:17:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:12.473 10:17:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:12.473 10:17:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:12.473 10:17:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:10:12.473 10:17:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:12.473 10:17:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:12.473 10:17:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:12.473 10:17:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:12.473 10:17:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:12.473 10:17:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:12.473 [2024-07-15 10:17:49.486461] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:12.473 10:17:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:12.473 10:17:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 192.168.100.8 -s 4420 00:10:12.473 10:17:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:10:12.473 10:17:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 192.168.100.8 -s 4420 00:10:12.473 10:17:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:10:12.473 10:17:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:12.473 10:17:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:10:12.473 10:17:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:12.473 10:17:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:10:12.473 10:17:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:12.473 10:17:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:10:12.473 10:17:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:10:12.473 10:17:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 192.168.100.8 -s 4420 00:10:12.473 [2024-07-15 10:17:49.542175] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:10:12.473 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:12.473 could not add new controller: failed to 
write to nvme-fabrics device 00:10:12.473 10:17:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:10:12.473 10:17:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:12.473 10:17:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:12.473 10:17:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:12.473 10:17:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:12.473 10:17:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:12.473 10:17:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:12.473 10:17:49 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:12.473 10:17:49 nvmf_rdma.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:13.856 10:17:50 nvmf_rdma.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:10:13.856 10:17:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:13.856 10:17:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:13.856 10:17:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:13.856 10:17:50 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:16.397 10:17:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:16.397 10:17:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:16.397 10:17:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:16.397 10:17:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:16.397 10:17:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:16.397 10:17:52 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:16.397 10:17:52 nvmf_rdma.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:17.337 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:17.337 10:17:54 nvmf_rdma.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:17.337 10:17:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:17.337 10:17:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:17.337 10:17:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:17.337 10:17:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:17.337 10:17:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:17.337 10:17:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:17.337 10:17:54 nvmf_rdma.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:17.337 10:17:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.337 10:17:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:17.337 10:17:54 
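The "could not add new controller" error above is the expected outcome of a negative test: the connect is wrapped in NOT, which passes only if the command fails while the host NQN is not yet allowed on the subsystem. A sketch of that wrapper (the real NOT() in autotest_common.sh also validates the executable first, as the valid_exec_arg entries show):

#!/usr/bin/env bash
# Sketch: assert that a command fails.
NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))   # succeed only if the wrapped command returned nonzero
}

# Expected to fail until nvmf_subsystem_add_host (or allow_any_host) runs:
# NOT nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode1 \
#     -q "$hostnqn" --hostnqn="$hostnqn" -a 192.168.100.8 -s 4420
# ($hostnqn stands in for the uuid:00539ede-... NQN used in this run.)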
nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.337 10:17:54 nvmf_rdma.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:17.337 10:17:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:10:17.337 10:17:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:17.337 10:17:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:10:17.337 10:17:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:17.337 10:17:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:10:17.337 10:17:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:17.337 10:17:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:10:17.337 10:17:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:17.337 10:17:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:10:17.337 10:17:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:10:17.337 10:17:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:17.337 [2024-07-15 10:17:54.395464] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:10:17.337 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:17.337 could not add new controller: failed to write to nvme-fabrics device 00:10:17.337 10:17:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:10:17.337 10:17:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:17.337 10:17:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:17.337 10:17:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:17.337 10:17:54 nvmf_rdma.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:10:17.337 10:17:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.337 10:17:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:17.337 10:17:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.337 10:17:54 nvmf_rdma.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:18.720 10:17:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:10:18.720 10:17:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:18.720 10:17:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 
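waitforserial and waitforserial_disconnect, traced repeatedly above, poll lsblk until a block device carrying the subsystem serial (SPDKISFASTANDAWESOME) appears or disappears. Reconstructed from the trace; the helpers in autotest_common.sh differ in minor details:

#!/usr/bin/env bash
# Reconstructed wait helpers; retry count and sleep mirror the traced values.
waitforserial() {
    local serial=$1 nvme_device_counter=${2:-1} nvme_devices=0 i=0
    while (( i++ <= 15 )); do
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( nvme_devices == nvme_device_counter )) && return 0
        sleep 2
    done
    return 1
}

waitforserial_disconnect() {
    local serial=$1 i=0
    while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
        (( i++ > 15 )) && return 1
        sleep 1
    done
    return 0
}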
00:10:18.720 10:17:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:18.720 10:17:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:20.768 10:17:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:20.768 10:17:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:20.768 10:17:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:20.768 10:17:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:20.768 10:17:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:20.768 10:17:57 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:20.768 10:17:57 nvmf_rdma.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:22.148 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:22.148 10:17:59 nvmf_rdma.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:22.149 10:17:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:22.149 10:17:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:22.149 10:17:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:22.149 10:17:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:22.149 10:17:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:22.149 10:17:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:22.149 10:17:59 nvmf_rdma.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:22.149 10:17:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.149 10:17:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:22.149 10:17:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.149 10:17:59 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:10:22.149 10:17:59 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:22.149 10:17:59 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:22.149 10:17:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.149 10:17:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:22.149 10:17:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.149 10:17:59 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:22.149 10:17:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.149 10:17:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:22.149 [2024-07-15 10:17:59.246178] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:22.149 10:17:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.149 10:17:59 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:22.149 10:17:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.149 10:17:59 nvmf_rdma.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:10:22.149 10:17:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.149 10:17:59 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:22.149 10:17:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.149 10:17:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:22.149 10:17:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.149 10:17:59 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:23.528 10:18:00 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:23.529 10:18:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:23.529 10:18:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:23.529 10:18:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:23.529 10:18:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:26.072 10:18:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:26.072 10:18:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:26.072 10:18:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:26.072 10:18:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:26.072 10:18:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:26.072 10:18:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:26.072 10:18:02 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:27.012 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:27.012 10:18:03 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:27.012 10:18:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:27.012 10:18:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:27.012 10:18:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:27.012 10:18:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:27.012 10:18:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:27.012 10:18:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:27.012 10:18:03 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:27.012 10:18:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.012 10:18:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:27.012 10:18:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.012 10:18:03 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:27.012 10:18:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.012 10:18:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:27.012 10:18:03 
nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.012 10:18:03 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:27.012 10:18:03 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:27.012 10:18:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.012 10:18:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:27.012 10:18:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.012 10:18:03 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:27.012 10:18:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.012 10:18:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:27.012 [2024-07-15 10:18:03.950269] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:27.012 10:18:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.012 10:18:03 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:27.012 10:18:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.012 10:18:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:27.012 10:18:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.012 10:18:03 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:27.012 10:18:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:27.012 10:18:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:27.012 10:18:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:27.012 10:18:03 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:28.396 10:18:05 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:28.396 10:18:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:28.396 10:18:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:28.396 10:18:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:28.396 10:18:05 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:30.308 10:18:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:30.308 10:18:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:30.308 10:18:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:30.308 10:18:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:30.308 10:18:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:30.308 10:18:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:30.308 10:18:07 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:31.691 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.691 10:18:08 
nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:31.691 10:18:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:31.691 10:18:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:31.691 10:18:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:31.691 10:18:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:31.691 10:18:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:31.691 10:18:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:31.691 10:18:08 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:31.691 10:18:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.691 10:18:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:31.691 10:18:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.691 10:18:08 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:31.691 10:18:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.691 10:18:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:31.691 10:18:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.691 10:18:08 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:31.691 10:18:08 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:31.691 10:18:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.691 10:18:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:31.691 10:18:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.691 10:18:08 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:31.691 10:18:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.691 10:18:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:31.691 [2024-07-15 10:18:08.797624] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:31.691 10:18:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.691 10:18:08 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:31.691 10:18:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.691 10:18:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:31.691 10:18:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.691 10:18:08 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:31.691 10:18:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.691 10:18:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:31.691 10:18:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.691 10:18:08 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 
--hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:33.068 10:18:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:33.068 10:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:33.068 10:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:33.068 10:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:33.068 10:18:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:35.610 10:18:12 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:35.610 10:18:12 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:35.610 10:18:12 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:35.610 10:18:12 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:35.610 10:18:12 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:35.610 10:18:12 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:35.610 10:18:12 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:36.551 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.551 10:18:13 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:36.551 10:18:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:36.551 10:18:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:36.551 10:18:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:36.551 10:18:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:36.551 10:18:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:36.551 10:18:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:36.551 10:18:13 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:36.551 10:18:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:36.551 10:18:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:36.551 10:18:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:36.551 10:18:13 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:36.551 10:18:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:36.551 10:18:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:36.551 10:18:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:36.551 10:18:13 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:36.551 10:18:13 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:36.552 10:18:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:36.552 10:18:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:36.552 10:18:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:36.552 10:18:13 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 
-t rdma -a 192.168.100.8 -s 4420 00:10:36.552 10:18:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:36.552 10:18:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:36.552 [2024-07-15 10:18:13.615703] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:36.552 10:18:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:36.552 10:18:13 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:36.552 10:18:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:36.552 10:18:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:36.552 10:18:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:36.552 10:18:13 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:36.552 10:18:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:36.552 10:18:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:36.552 10:18:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:36.552 10:18:13 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:37.935 10:18:15 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:37.935 10:18:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:37.935 10:18:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:37.935 10:18:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:37.935 10:18:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:40.477 10:18:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:40.477 10:18:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:40.477 10:18:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:40.477 10:18:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:40.477 10:18:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:40.477 10:18:17 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:40.477 10:18:17 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:41.416 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.416 10:18:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:41.416 10:18:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:41.416 10:18:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:41.416 10:18:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:41.416 10:18:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:41.417 10:18:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:41.417 10:18:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 
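The iterations traced above all drive the same subsystem lifecycle through the SPDK JSON-RPC interface and the host-side nvme CLI. A condensed sketch of one pass, using only the commands and arguments visible in the trace (rpc.py stands in for the rpc_cmd wrapper; this is an illustration, not the verbatim target/rpc.sh source):

    # Create the subsystem, expose it over RDMA, and attach the Malloc1 namespace as nsid 5.
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
    rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1

    # Connect from the host side and wait until the namespace shows up with the expected serial.
    nvme connect -i 15 -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
        --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396
    waitforserial SPDKISFASTANDAWESOME

    # Tear everything down again before the next iteration.
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    waitforserial_disconnect SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1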
00:10:41.417 10:18:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:41.417 10:18:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.417 10:18:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:41.417 10:18:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.417 10:18:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:41.417 10:18:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.417 10:18:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:41.417 10:18:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.417 10:18:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:41.417 10:18:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:41.417 10:18:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.417 10:18:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:41.417 10:18:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.417 10:18:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:41.417 10:18:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.417 10:18:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:41.417 [2024-07-15 10:18:18.461289] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:41.417 10:18:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.417 10:18:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:41.417 10:18:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.417 10:18:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:41.417 10:18:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.417 10:18:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:41.417 10:18:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.417 10:18:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:41.417 10:18:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.417 10:18:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:42.799 10:18:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:42.799 10:18:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:42.799 10:18:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:42.799 10:18:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:42.799 10:18:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:44.710 10:18:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 
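The waitforserial and waitforserial_disconnect calls being traced here poll lsblk until a block device with the expected serial appears or disappears on the host. A minimal sketch of that polling pattern, reconstructed from the trace (the 15-iteration bound and 2-second sleep are taken from the calls above; the real common.sh helpers may differ in detail):

    waitforserial() {
        local serial=$1 i=0
        # Wait until lsblk reports at least one device carrying the serial.
        while ((i++ <= 15)); do
            (($(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1)) && return 0
            sleep 2
        done
        return 1
    }

    waitforserial_disconnect() {
        local serial=$1 i=0
        # Wait until no device reports the serial any more.
        while ((i++ <= 15)); do
            lsblk -l -o NAME,SERIAL | grep -q -w "$serial" || return 0
            sleep 2
        done
        return 1
    }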
00:10:44.710 10:18:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:44.710 10:18:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:44.970 10:18:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:44.970 10:18:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:44.970 10:18:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:44.970 10:18:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:45.913 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.913 10:18:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:45.913 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:45.913 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:45.913 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:45.913 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:45.913 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:45.913 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:45.913 10:18:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:45.913 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.913 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.177 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.178 [2024-07-15 10:18:23.151203] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.178 [2024-07-15 10:18:23.211429] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.178 
10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.178 [2024-07-15 10:18:23.275630] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.178 [2024-07-15 10:18:23.331819] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.178 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.179 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.179 10:18:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:46.179 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.179 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.444 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.444 10:18:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:46.444 10:18:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:46.444 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.444 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.444 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.444 10:18:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:46.444 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.444 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.444 [2024-07-15 10:18:23.392016] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:46.444 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.444 10:18:23 nvmf_rdma.nvmf_rpc -- 
target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:46.444 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.444 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.444 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.444 10:18:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:46.444 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.444 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.444 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.444 10:18:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:46.444 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.444 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.444 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.444 10:18:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:46.444 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.444 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.444 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.444 10:18:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:10:46.444 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.444 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.444 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.444 10:18:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:10:46.444 "tick_rate": 2400000000, 00:10:46.444 "poll_groups": [ 00:10:46.444 { 00:10:46.444 "name": "nvmf_tgt_poll_group_000", 00:10:46.444 "admin_qpairs": 2, 00:10:46.444 "io_qpairs": 27, 00:10:46.444 "current_admin_qpairs": 0, 00:10:46.444 "current_io_qpairs": 0, 00:10:46.444 "pending_bdev_io": 0, 00:10:46.444 "completed_nvme_io": 81, 00:10:46.444 "transports": [ 00:10:46.444 { 00:10:46.444 "trtype": "RDMA", 00:10:46.444 "pending_data_buffer": 0, 00:10:46.444 "devices": [ 00:10:46.444 { 00:10:46.444 "name": "mlx5_0", 00:10:46.444 "polls": 5061614, 00:10:46.444 "idle_polls": 5061366, 00:10:46.444 "completions": 273, 00:10:46.444 "requests": 136, 00:10:46.444 "request_latency": 20088364, 00:10:46.444 "pending_free_request": 0, 00:10:46.444 "pending_rdma_read": 0, 00:10:46.444 "pending_rdma_write": 0, 00:10:46.444 "pending_rdma_send": 0, 00:10:46.444 "total_send_wrs": 217, 00:10:46.444 "send_doorbell_updates": 122, 00:10:46.444 "total_recv_wrs": 4232, 00:10:46.444 "recv_doorbell_updates": 122 00:10:46.444 }, 00:10:46.444 { 00:10:46.444 "name": "mlx5_1", 00:10:46.444 "polls": 5061614, 00:10:46.444 "idle_polls": 5061614, 00:10:46.444 "completions": 0, 00:10:46.444 "requests": 0, 00:10:46.444 "request_latency": 0, 00:10:46.444 "pending_free_request": 0, 00:10:46.444 "pending_rdma_read": 0, 00:10:46.444 "pending_rdma_write": 0, 00:10:46.444 "pending_rdma_send": 0, 00:10:46.444 "total_send_wrs": 0, 00:10:46.444 "send_doorbell_updates": 0, 00:10:46.444 "total_recv_wrs": 4096, 00:10:46.444 "recv_doorbell_updates": 1 00:10:46.444 } 
00:10:46.444 ] 00:10:46.444 } 00:10:46.444 ] 00:10:46.444 }, 00:10:46.444 { 00:10:46.444 "name": "nvmf_tgt_poll_group_001", 00:10:46.444 "admin_qpairs": 2, 00:10:46.444 "io_qpairs": 26, 00:10:46.444 "current_admin_qpairs": 0, 00:10:46.444 "current_io_qpairs": 0, 00:10:46.444 "pending_bdev_io": 0, 00:10:46.444 "completed_nvme_io": 127, 00:10:46.444 "transports": [ 00:10:46.444 { 00:10:46.444 "trtype": "RDMA", 00:10:46.444 "pending_data_buffer": 0, 00:10:46.444 "devices": [ 00:10:46.444 { 00:10:46.444 "name": "mlx5_0", 00:10:46.444 "polls": 5041878, 00:10:46.444 "idle_polls": 5041558, 00:10:46.444 "completions": 360, 00:10:46.444 "requests": 180, 00:10:46.444 "request_latency": 29831100, 00:10:46.444 "pending_free_request": 0, 00:10:46.444 "pending_rdma_read": 0, 00:10:46.444 "pending_rdma_write": 0, 00:10:46.444 "pending_rdma_send": 0, 00:10:46.444 "total_send_wrs": 306, 00:10:46.444 "send_doorbell_updates": 155, 00:10:46.444 "total_recv_wrs": 4276, 00:10:46.444 "recv_doorbell_updates": 156 00:10:46.444 }, 00:10:46.444 { 00:10:46.444 "name": "mlx5_1", 00:10:46.444 "polls": 5041878, 00:10:46.444 "idle_polls": 5041878, 00:10:46.444 "completions": 0, 00:10:46.444 "requests": 0, 00:10:46.444 "request_latency": 0, 00:10:46.444 "pending_free_request": 0, 00:10:46.444 "pending_rdma_read": 0, 00:10:46.444 "pending_rdma_write": 0, 00:10:46.444 "pending_rdma_send": 0, 00:10:46.444 "total_send_wrs": 0, 00:10:46.444 "send_doorbell_updates": 0, 00:10:46.444 "total_recv_wrs": 4096, 00:10:46.444 "recv_doorbell_updates": 1 00:10:46.444 } 00:10:46.444 ] 00:10:46.444 } 00:10:46.444 ] 00:10:46.444 }, 00:10:46.444 { 00:10:46.444 "name": "nvmf_tgt_poll_group_002", 00:10:46.444 "admin_qpairs": 1, 00:10:46.444 "io_qpairs": 26, 00:10:46.444 "current_admin_qpairs": 0, 00:10:46.444 "current_io_qpairs": 0, 00:10:46.444 "pending_bdev_io": 0, 00:10:46.444 "completed_nvme_io": 119, 00:10:46.444 "transports": [ 00:10:46.444 { 00:10:46.444 "trtype": "RDMA", 00:10:46.444 "pending_data_buffer": 0, 00:10:46.444 "devices": [ 00:10:46.444 { 00:10:46.444 "name": "mlx5_0", 00:10:46.444 "polls": 5062998, 00:10:46.444 "idle_polls": 5062740, 00:10:46.444 "completions": 293, 00:10:46.444 "requests": 146, 00:10:46.444 "request_latency": 25572922, 00:10:46.444 "pending_free_request": 0, 00:10:46.444 "pending_rdma_read": 0, 00:10:46.444 "pending_rdma_write": 0, 00:10:46.444 "pending_rdma_send": 0, 00:10:46.444 "total_send_wrs": 252, 00:10:46.444 "send_doorbell_updates": 126, 00:10:46.444 "total_recv_wrs": 4242, 00:10:46.444 "recv_doorbell_updates": 126 00:10:46.444 }, 00:10:46.444 { 00:10:46.444 "name": "mlx5_1", 00:10:46.444 "polls": 5062998, 00:10:46.444 "idle_polls": 5062998, 00:10:46.444 "completions": 0, 00:10:46.444 "requests": 0, 00:10:46.444 "request_latency": 0, 00:10:46.444 "pending_free_request": 0, 00:10:46.444 "pending_rdma_read": 0, 00:10:46.444 "pending_rdma_write": 0, 00:10:46.444 "pending_rdma_send": 0, 00:10:46.444 "total_send_wrs": 0, 00:10:46.445 "send_doorbell_updates": 0, 00:10:46.445 "total_recv_wrs": 4096, 00:10:46.445 "recv_doorbell_updates": 1 00:10:46.445 } 00:10:46.445 ] 00:10:46.445 } 00:10:46.445 ] 00:10:46.445 }, 00:10:46.445 { 00:10:46.445 "name": "nvmf_tgt_poll_group_003", 00:10:46.445 "admin_qpairs": 2, 00:10:46.445 "io_qpairs": 26, 00:10:46.445 "current_admin_qpairs": 0, 00:10:46.445 "current_io_qpairs": 0, 00:10:46.445 "pending_bdev_io": 0, 00:10:46.445 "completed_nvme_io": 128, 00:10:46.445 "transports": [ 00:10:46.445 { 00:10:46.445 "trtype": "RDMA", 00:10:46.445 "pending_data_buffer": 0, 
00:10:46.445 "devices": [ 00:10:46.445 { 00:10:46.445 "name": "mlx5_0", 00:10:46.445 "polls": 3457277, 00:10:46.445 "idle_polls": 3456957, 00:10:46.445 "completions": 364, 00:10:46.445 "requests": 182, 00:10:46.445 "request_latency": 40901980, 00:10:46.445 "pending_free_request": 0, 00:10:46.445 "pending_rdma_read": 0, 00:10:46.445 "pending_rdma_write": 0, 00:10:46.445 "pending_rdma_send": 0, 00:10:46.445 "total_send_wrs": 310, 00:10:46.445 "send_doorbell_updates": 156, 00:10:46.445 "total_recv_wrs": 4278, 00:10:46.445 "recv_doorbell_updates": 157 00:10:46.445 }, 00:10:46.445 { 00:10:46.445 "name": "mlx5_1", 00:10:46.445 "polls": 3457277, 00:10:46.445 "idle_polls": 3457277, 00:10:46.445 "completions": 0, 00:10:46.445 "requests": 0, 00:10:46.445 "request_latency": 0, 00:10:46.445 "pending_free_request": 0, 00:10:46.445 "pending_rdma_read": 0, 00:10:46.445 "pending_rdma_write": 0, 00:10:46.445 "pending_rdma_send": 0, 00:10:46.445 "total_send_wrs": 0, 00:10:46.445 "send_doorbell_updates": 0, 00:10:46.445 "total_recv_wrs": 4096, 00:10:46.445 "recv_doorbell_updates": 1 00:10:46.445 } 00:10:46.445 ] 00:10:46.445 } 00:10:46.445 ] 00:10:46.445 } 00:10:46.445 ] 00:10:46.445 }' 00:10:46.445 10:18:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:10:46.445 10:18:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:10:46.445 10:18:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:10:46.445 10:18:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:46.445 10:18:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:10:46.445 10:18:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:10:46.445 10:18:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:10:46.445 10:18:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:10:46.445 10:18:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:46.445 10:18:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@113 -- # (( 105 > 0 )) 00:10:46.445 10:18:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == rdma ']' 00:10:46.445 10:18:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@117 -- # jsum '.poll_groups[].transports[].devices[].completions' 00:10:46.445 10:18:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].completions' 00:10:46.445 10:18:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].completions' 00:10:46.445 10:18:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:46.445 10:18:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@117 -- # (( 1290 > 0 )) 00:10:46.445 10:18:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@118 -- # jsum '.poll_groups[].transports[].devices[].request_latency' 00:10:46.445 10:18:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].request_latency' 00:10:46.445 10:18:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].request_latency' 00:10:46.445 10:18:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:46.705 10:18:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@118 -- # (( 116394366 > 0 )) 00:10:46.705 10:18:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:10:46.705 10:18:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:10:46.705 10:18:23 
nvmf_rdma.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:46.705 10:18:23 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:10:46.705 10:18:23 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:10:46.705 10:18:23 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:10:46.705 10:18:23 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:10:46.705 10:18:23 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:46.705 10:18:23 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:10:46.705 rmmod nvme_rdma 00:10:46.705 rmmod nvme_fabrics 00:10:46.705 10:18:23 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:46.705 10:18:23 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:10:46.705 10:18:23 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:10:46.705 10:18:23 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 2792781 ']' 00:10:46.705 10:18:23 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 2792781 00:10:46.705 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 2792781 ']' 00:10:46.705 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 2792781 00:10:46.705 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:10:46.705 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:46.705 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2792781 00:10:46.705 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:46.705 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:46.705 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2792781' 00:10:46.705 killing process with pid 2792781 00:10:46.705 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 2792781 00:10:46.705 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 2792781 00:10:46.966 10:18:23 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:46.967 10:18:23 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:10:46.967 00:10:46.967 real 0m44.197s 00:10:46.967 user 2m25.332s 00:10:46.967 sys 0m7.416s 00:10:46.967 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:46.967 10:18:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.967 ************************************ 00:10:46.967 END TEST nvmf_rpc 00:10:46.967 ************************************ 00:10:46.967 10:18:24 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:10:46.967 10:18:24 nvmf_rdma -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:10:46.967 10:18:24 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:46.967 10:18:24 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:46.967 10:18:24 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:10:46.967 ************************************ 00:10:46.967 START TEST nvmf_invalid 00:10:46.967 ************************************ 00:10:46.967 10:18:24 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:10:47.229 * Looking for test storage... 
00:10:47.229 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:47.229 10:18:24 nvmf_rdma.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:47.229 10:18:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:10:47.229 10:18:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:47.229 10:18:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:47.229 10:18:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:47.229 10:18:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:47.229 10:18:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:47.229 10:18:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:47.229 10:18:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:47.229 10:18:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:47.229 10:18:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:47.229 10:18:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:47.229 10:18:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:47.229 10:18:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:47.229 10:18:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:47.229 10:18:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:47.229 10:18:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:47.229 10:18:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:47.229 10:18:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:47.229 10:18:24 nvmf_rdma.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:47.229 10:18:24 nvmf_rdma.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:47.229 10:18:24 nvmf_rdma.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:47.229 10:18:24 nvmf_rdma.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.229 10:18:24 nvmf_rdma.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.229 10:18:24 nvmf_rdma.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.229 10:18:24 nvmf_rdma.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:10:47.229 10:18:24 nvmf_rdma.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.229 10:18:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:10:47.229 10:18:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:47.229 10:18:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:47.229 10:18:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:47.229 10:18:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:47.229 10:18:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:47.229 10:18:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:47.229 10:18:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:47.229 10:18:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:47.229 10:18:24 nvmf_rdma.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:10:47.229 10:18:24 nvmf_rdma.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:47.229 10:18:24 nvmf_rdma.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:10:47.229 10:18:24 nvmf_rdma.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:10:47.229 10:18:24 nvmf_rdma.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:10:47.229 10:18:24 nvmf_rdma.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:10:47.229 10:18:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:10:47.229 10:18:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:47.229 10:18:24 
nvmf_rdma.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:47.229 10:18:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:47.229 10:18:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:47.229 10:18:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:47.229 10:18:24 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:47.229 10:18:24 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:47.229 10:18:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:47.229 10:18:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:47.229 10:18:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:10:47.229 10:18:24 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:55.368 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:55.368 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:10:55.368 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:55.368 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:55.368 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:55.368 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:55.368 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:55.368 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:10:55.368 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:55.368 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:10:55.368 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:10:55.368 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:10:55.368 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:10:55.368 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:10:55.368 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:10:55.368 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:55.368 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:55.368 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:55.368 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:55.368 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:55.368 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:55.368 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:55.368 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:55.368 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:55.368 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:55.368 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:55.369 
10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:10:55.369 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:10:55.369 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:10:55.369 Found net devices under 0000:98:00.0: mlx_0_0 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:10:55.369 Found net devices under 0000:98:00.1: mlx_0_1 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@420 -- # rdma_device_init 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@58 -- # uname 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@62 -- # modprobe ib_cm 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@63 -- # modprobe ib_core 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@64 -- # modprobe ib_umad 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@66 -- # modprobe iw_cm 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@502 -- # allocate_nic_ips 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@73 -- # get_rdma_if_list 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:10:55.369 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:55.369 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:10:55.369 altname enp152s0f0np0 00:10:55.369 altname ens817f0np0 00:10:55.369 inet 192.168.100.8/24 scope global mlx_0_0 00:10:55.369 valid_lft forever preferred_lft forever 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:55.369 10:18:31 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:10:55.369 10:18:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:55.369 10:18:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:55.369 10:18:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:55.369 10:18:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:55.369 10:18:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:10:55.369 10:18:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:10:55.369 10:18:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:10:55.369 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:55.369 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:10:55.369 altname enp152s0f1np1 00:10:55.369 altname ens817f1np1 00:10:55.369 inet 192.168.100.9/24 scope global mlx_0_1 00:10:55.369 valid_lft forever preferred_lft forever 00:10:55.369 10:18:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:10:55.369 10:18:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:55.369 10:18:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:55.369 10:18:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:10:55.369 10:18:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:10:55.369 10:18:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@86 -- # get_rdma_if_list 00:10:55.369 10:18:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:55.369 10:18:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@94 -- # mapfile -t 
rxe_net_devs 00:10:55.369 10:18:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:55.369 10:18:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:55.370 10:18:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:55.370 10:18:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:55.370 10:18:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:55.370 10:18:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:55.370 10:18:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:55.370 10:18:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:10:55.370 10:18:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:55.370 10:18:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:55.370 10:18:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:55.370 10:18:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:55.370 10:18:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:55.370 10:18:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:55.370 10:18:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:10:55.370 10:18:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:55.370 10:18:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:10:55.370 10:18:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:55.370 10:18:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:55.370 10:18:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:55.370 10:18:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:55.370 10:18:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:55.370 10:18:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:10:55.370 10:18:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:55.370 10:18:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:55.370 10:18:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:55.370 10:18:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:55.370 10:18:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:10:55.370 192.168.100.9' 00:10:55.370 10:18:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:10:55.370 192.168.100.9' 00:10:55.370 10:18:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@457 -- # head -n 1 00:10:55.370 10:18:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:55.370 10:18:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:10:55.370 192.168.100.9' 00:10:55.370 10:18:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@458 -- # tail -n +2 00:10:55.370 10:18:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@458 -- # head -n 1 00:10:55.370 10:18:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:55.370 10:18:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@459 -- # 
'[' -z 192.168.100.8 ']' 00:10:55.370 10:18:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:55.370 10:18:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:10:55.370 10:18:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:10:55.370 10:18:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:10:55.370 10:18:32 nvmf_rdma.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:10:55.370 10:18:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:55.370 10:18:32 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:55.370 10:18:32 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:55.370 10:18:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=2804901 00:10:55.370 10:18:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 2804901 00:10:55.370 10:18:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:55.370 10:18:32 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 2804901 ']' 00:10:55.370 10:18:32 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.370 10:18:32 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:55.370 10:18:32 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:55.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:55.370 10:18:32 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:55.370 10:18:32 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:55.370 [2024-07-15 10:18:32.175758] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:10:55.370 [2024-07-15 10:18:32.175812] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:55.370 EAL: No free 2048 kB hugepages reported on node 1 00:10:55.370 [2024-07-15 10:18:32.243647] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:55.370 [2024-07-15 10:18:32.309300] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:55.370 [2024-07-15 10:18:32.309339] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:55.370 [2024-07-15 10:18:32.309347] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:55.370 [2024-07-15 10:18:32.309353] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:55.370 [2024-07-15 10:18:32.309359] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
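The passage above derives the two RDMA target addresses the same way every nvmf test in this run does: read the IPv4 address off each mlx_0_* netdev with ip/awk/cut, collect the results into RDMA_IP_LIST, and peel off the first and second lines as NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP. A minimal bash sketch of that pipeline, reusing the interface names and commands from this trace (the get_ipv4 helper name is illustrative, not the nvmf/common.sh function):

#!/usr/bin/env bash
# Print the IPv4 address (without the /prefix length) configured on a netdev,
# using the same ip -o -4 / awk / cut pipeline exercised in the trace above.
get_ipv4() {
    local ifname=$1
    ip -o -4 addr show "$ifname" | awk '{print $4}' | cut -d/ -f1
}

# One address per RDMA-capable interface; names taken from this run.
RDMA_IP_LIST=$(for dev in mlx_0_0 mlx_0_1; do get_ipv4 "$dev"; done)

# First line is the primary target address, second line the secondary one.
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)

echo "first=$NVMF_FIRST_TARGET_IP second=$NVMF_SECOND_TARGET_IP"

With the addresses from this run the two variables come out as 192.168.100.8 and 192.168.100.9, which is what NVMF_TRANSPORT_OPTS and the listener addresses used later in the log are built from.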
00:10:55.370 [2024-07-15 10:18:32.309528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:55.370 [2024-07-15 10:18:32.309652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:55.370 [2024-07-15 10:18:32.309807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.370 [2024-07-15 10:18:32.309807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:55.943 10:18:32 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:55.943 10:18:32 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:10:55.943 10:18:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:55.943 10:18:32 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:55.943 10:18:32 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:55.943 10:18:32 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:55.943 10:18:32 nvmf_rdma.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:55.943 10:18:32 nvmf_rdma.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode17083 00:10:55.943 [2024-07-15 10:18:33.130277] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:10:56.207 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:10:56.207 { 00:10:56.207 "nqn": "nqn.2016-06.io.spdk:cnode17083", 00:10:56.208 "tgt_name": "foobar", 00:10:56.208 "method": "nvmf_create_subsystem", 00:10:56.208 "req_id": 1 00:10:56.208 } 00:10:56.208 Got JSON-RPC error response 00:10:56.208 response: 00:10:56.208 { 00:10:56.208 "code": -32603, 00:10:56.208 "message": "Unable to find target foobar" 00:10:56.208 }' 00:10:56.208 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:10:56.208 { 00:10:56.208 "nqn": "nqn.2016-06.io.spdk:cnode17083", 00:10:56.208 "tgt_name": "foobar", 00:10:56.208 "method": "nvmf_create_subsystem", 00:10:56.208 "req_id": 1 00:10:56.208 } 00:10:56.208 Got JSON-RPC error response 00:10:56.208 response: 00:10:56.208 { 00:10:56.208 "code": -32603, 00:10:56.208 "message": "Unable to find target foobar" 00:10:56.208 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:10:56.208 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:10:56.208 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode14766 00:10:56.208 [2024-07-15 10:18:33.302858] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14766: invalid serial number 'SPDKISFASTANDAWESOME' 00:10:56.208 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:10:56.208 { 00:10:56.208 "nqn": "nqn.2016-06.io.spdk:cnode14766", 00:10:56.208 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:10:56.208 "method": "nvmf_create_subsystem", 00:10:56.208 "req_id": 1 00:10:56.208 } 00:10:56.208 Got JSON-RPC error response 00:10:56.208 response: 00:10:56.208 { 00:10:56.208 "code": -32602, 00:10:56.208 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:10:56.208 }' 00:10:56.208 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@46 -- # 
[[ request: 00:10:56.208 { 00:10:56.208 "nqn": "nqn.2016-06.io.spdk:cnode14766", 00:10:56.208 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:10:56.208 "method": "nvmf_create_subsystem", 00:10:56.208 "req_id": 1 00:10:56.208 } 00:10:56.208 Got JSON-RPC error response 00:10:56.208 response: 00:10:56.208 { 00:10:56.208 "code": -32602, 00:10:56.208 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:10:56.208 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:10:56.208 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:10:56.208 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode12694 00:10:56.468 [2024-07-15 10:18:33.483487] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12694: invalid model number 'SPDK_Controller' 00:10:56.468 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:10:56.468 { 00:10:56.468 "nqn": "nqn.2016-06.io.spdk:cnode12694", 00:10:56.468 "model_number": "SPDK_Controller\u001f", 00:10:56.468 "method": "nvmf_create_subsystem", 00:10:56.468 "req_id": 1 00:10:56.468 } 00:10:56.468 Got JSON-RPC error response 00:10:56.468 response: 00:10:56.468 { 00:10:56.468 "code": -32602, 00:10:56.468 "message": "Invalid MN SPDK_Controller\u001f" 00:10:56.468 }' 00:10:56.468 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:10:56.468 { 00:10:56.468 "nqn": "nqn.2016-06.io.spdk:cnode12694", 00:10:56.468 "model_number": "SPDK_Controller\u001f", 00:10:56.468 "method": "nvmf_create_subsystem", 00:10:56.468 "req_id": 1 00:10:56.468 } 00:10:56.468 Got JSON-RPC error response 00:10:56.468 response: 00:10:56.468 { 00:10:56.468 "code": -32602, 00:10:56.468 "message": "Invalid MN SPDK_Controller\u001f" 00:10:56.468 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:10:56.468 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:10:56.468 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:10:56.468 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:10:56.468 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:10:56.468 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:10:56.468 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:10:56.468 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.468 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:10:56.468 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:10:56.468 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:10:56.468 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.468 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.468 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # 
printf %x 116 00:10:56.468 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:10:56.468 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:10:56.468 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.468 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.468 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:10:56.468 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:10:56.468 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:10:56.468 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.468 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.468 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:10:56.468 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:10:56.468 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:10:56.468 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.468 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.468 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:10:56.468 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:10:56.468 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:10:56.468 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.468 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.468 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:10:56.468 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:10:56.468 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:10:56.468 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.468 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.468 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:10:56.468 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:10:56.468 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:10:56.468 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.468 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.469 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:10:56.469 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:10:56.469 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:10:56.469 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.469 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.469 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:10:56.469 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:10:56.469 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:10:56.469 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.469 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.469 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:10:56.469 10:18:33 
nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:10:56.469 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:10:56.469 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.469 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.469 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:10:56.469 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:10:56.469 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:10:56.469 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.469 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.469 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:10:56.469 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:10:56.469 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:10:56.469 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.469 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.469 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:10:56.469 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:10:56.469 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:10:56.469 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.469 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.469 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:10:56.469 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:10:56.469 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:10:56.469 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.469 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.469 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:10:56.469 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:10:56.469 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:10:56.469 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.469 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.469 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:10:56.469 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:10:56.469 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:10:56.469 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.469 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.469 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:10:56.469 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:10:56.469 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:10:56.469 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.469 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.469 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:10:56.469 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # 
echo -e '\x74' 00:10:56.469 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:10:56.469 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.469 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.469 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:10:56.469 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:10:56.469 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:10:56.469 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.469 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.469 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:10:56.730 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:10:56.730 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:10:56.730 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.730 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.730 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:10:56.730 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:10:56.730 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:10:56.730 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.730 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.730 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@28 -- # [[ > == \- ]] 00:10:56.730 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@31 -- # echo '>tri.fdhBEm:1ckF0t$P@' 00:10:56.730 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '>tri.fdhBEm:1ckF0t$P@' nqn.2016-06.io.spdk:cnode31775 00:10:56.730 [2024-07-15 10:18:33.816519] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31775: invalid serial number '>tri.fdhBEm:1ckF0t$P@' 00:10:56.730 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:10:56.730 { 00:10:56.730 "nqn": "nqn.2016-06.io.spdk:cnode31775", 00:10:56.730 "serial_number": ">tri.fdhBEm:1ckF0t$P@", 00:10:56.730 "method": "nvmf_create_subsystem", 00:10:56.730 "req_id": 1 00:10:56.730 } 00:10:56.730 Got JSON-RPC error response 00:10:56.730 response: 00:10:56.730 { 00:10:56.730 "code": -32602, 00:10:56.730 "message": "Invalid SN >tri.fdhBEm:1ckF0t$P@" 00:10:56.730 }' 00:10:56.730 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:10:56.730 { 00:10:56.730 "nqn": "nqn.2016-06.io.spdk:cnode31775", 00:10:56.730 "serial_number": ">tri.fdhBEm:1ckF0t$P@", 00:10:56.730 "method": "nvmf_create_subsystem", 00:10:56.730 "req_id": 1 00:10:56.730 } 00:10:56.730 Got JSON-RPC error response 00:10:56.730 response: 00:10:56.730 { 00:10:56.730 "code": -32602, 00:10:56.730 "message": "Invalid SN >tri.fdhBEm:1ckF0t$P@" 00:10:56.730 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:10:56.730 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:10:56.730 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:10:56.730 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' 
'56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:10:56.730 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:10:56.730 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:10:56.730 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:10:56.730 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.730 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:10:56.730 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:10:56.730 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:10:56.730 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.730 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.730 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:10:56.730 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:10:56.730 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:10:56.730 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.730 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.730 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:10:56.730 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:10:56.730 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:10:56.730 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.730 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.730 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:10:56.730 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:10:56.730 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:10:56.730 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.730 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.730 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:10:56.730 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:10:56.730 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:10:56.730 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.730 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.730 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:10:56.730 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:10:56.730 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:10:56.730 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.731 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.731 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:10:56.731 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 
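All of the nvmf_invalid cases above follow one pattern: build a deliberately bad target name, serial number, or model number (gen_random_s assembles the string character by character from the chars array, as the surrounding entries show), submit it with rpc.py nvmf_create_subsystem, and check that the JSON-RPC error text contains the expected 'Unable to find target', 'Invalid SN', or 'Invalid MN' message. A condensed sketch of that pattern, assuming a target already listening on /var/tmp/spdk.sock; random_s is an illustrative stand-in for gen_random_s and draws from a slightly narrower printable range (0x21-0x7e):

#!/usr/bin/env bash
set -e

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

# Build a printable random string of the requested length.
random_s() {
    local len=$1 s='' n c i
    for ((i = 0; i < len; i++)); do
        n=$((RANDOM % 94 + 33))                  # printable ASCII, 0x21..0x7e
        printf -v c "\\x$(printf '%x' "$n")"
        s+=$c
    done
    printf '%s\n' "$s"
}

# A 21-character serial number exceeds the 20-byte NVMe serial-number field,
# so the RPC must come back with an "Invalid SN" JSON-RPC error, as seen above.
bad_sn=$(random_s 21)
[[ $bad_sn == -* ]] && bad_sn="x${bad_sn:1}"     # guard against a leading '-', as the [[ ... == \- ]] check above does
out=$($rpc nvmf_create_subsystem -s "$bad_sn" nqn.2016-06.io.spdk:cnode31775 2>&1 || true)
if [[ $out == *"Invalid SN"* ]]; then
    echo "serial-number validation behaves as expected"
else
    echo "unexpected RPC response: $out" >&2
    exit 1
fi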
00:10:56.731 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:10:56.731 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.731 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.731 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:10:56.731 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:10:56.731 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:10:56.731 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.731 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.731 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:10:56.731 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:10:56.731 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:10:56.731 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.731 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.731 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:10:56.731 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:10:56.731 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:10:56.731 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.731 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.993 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:10:56.993 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:10:56.993 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:10:56.993 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.993 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.993 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:10:56.993 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:10:56.993 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:10:56.993 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.993 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.993 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:10:56.993 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:10:56.993 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:10:56.993 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.993 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.993 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:10:56.993 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:10:56.993 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:10:56.993 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.993 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.993 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:10:56.993 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:10:56.993 10:18:33 nvmf_rdma.nvmf_invalid -- 
target/invalid.sh@25 -- # string+='}' 00:10:56.993 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.993 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.993 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:10:56.993 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:10:56.993 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:10:56.993 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.993 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.993 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:10:56.993 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:10:56.993 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:10:56.993 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.993 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.993 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:10:56.993 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:10:56.993 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:10:56.993 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.993 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.993 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:10:56.993 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:10:56.993 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:10:56.993 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.993 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.993 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:10:56.993 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:10:56.993 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:10:56.993 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.993 10:18:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.993 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:10:56.993 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:10:56.993 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:10:56.993 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.993 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.993 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:10:56.993 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:10:56.993 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:10:56.993 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.993 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.993 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:10:56.993 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:10:56.993 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 
00:10:56.993 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.993 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.993 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:10:56.993 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:10:56.993 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:10:56.993 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.993 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.993 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:10:56.993 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:10:56.993 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:10:56.993 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.993 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.993 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:10:56.993 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:10:56.993 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:10:56.993 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.993 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.993 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:10:56.993 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:10:56.993 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:10:56.993 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.993 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.993 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:10:56.993 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:10:56.993 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:10:56.993 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.993 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.993 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:10:56.993 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:10:56.993 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:10:56.993 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.993 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.993 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:10:56.993 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:10:56.993 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:10:56.993 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.993 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.993 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:10:56.993 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:10:56.993 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:10:56.994 10:18:34 nvmf_rdma.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:10:56.994 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.994 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:10:56.994 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:10:56.994 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:10:56.994 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.994 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.994 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:10:56.994 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:10:56.994 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:10:56.994 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.994 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.994 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:10:56.994 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:10:56.994 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:10:56.994 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.994 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.994 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:10:56.994 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:10:56.994 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:10:56.994 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.994 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.994 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:10:56.994 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:10:56.994 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:10:56.994 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.994 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.994 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:10:56.994 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:10:56.994 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:10:56.994 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.994 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.994 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:10:56.994 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:10:56.994 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:10:56.994 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.994 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.994 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:10:56.994 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:10:56.994 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:10:56.994 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.994 
10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.994 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:10:56.994 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:10:56.994 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:10:56.994 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.994 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.994 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:10:56.994 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:10:56.994 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:10:56.994 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:56.994 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:56.994 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@28 -- # [[ Z == \- ]] 00:10:56.994 10:18:34 nvmf_rdma.nvmf_invalid -- target/invalid.sh@31 -- # echo 'Z.\|F\ /dev/null' 00:10:59.460 10:18:36 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:59.460 10:18:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:59.460 10:18:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:59.460 10:18:36 nvmf_rdma.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:10:59.460 10:18:36 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:07.619 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:07.619 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:11:07.619 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:07.619 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:07.619 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:07.619 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:07.619 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:07.619 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:11:07.619 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:07.619 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:11:07.619 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:11:07.619 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:11:07.619 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:11:07.619 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:11:07.619 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:11:07.619 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:07.619 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:07.619 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:07.619 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:07.619 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:07.619 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:07.619 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:07.619 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:07.619 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:07.619 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:07.619 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:07.619 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:07.619 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:07.619 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:07.619 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:07.619 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:07.619 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:07.619 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:07.619 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:07.619 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:11:07.619 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:11:07.619 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:07.619 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:07.619 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:07.619 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:07.619 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:07.619 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:07.619 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:07.619 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:11:07.619 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:11:07.619 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:07.619 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:07.619 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:07.619 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:07.619 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:07.619 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:07.619 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:07.619 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:07.619 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:07.619 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:07.619 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:07.619 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:07.619 10:18:44 nvmf_rdma.nvmf_abort -- 
nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:07.619 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:11:07.619 Found net devices under 0000:98:00.0: mlx_0_0 00:11:07.619 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:07.619 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:07.619 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:07.619 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:07.619 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:07.619 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:07.619 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:11:07.619 Found net devices under 0000:98:00.1: mlx_0_1 00:11:07.619 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@420 -- # rdma_device_init 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@58 -- # uname 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@502 -- # allocate_nic_ips 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort 
-- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:07.620 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:07.620 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:11:07.620 altname enp152s0f0np0 00:11:07.620 altname ens817f0np0 00:11:07.620 inet 192.168.100.8/24 scope global mlx_0_0 00:11:07.620 valid_lft forever preferred_lft forever 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:07.620 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:07.620 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:11:07.620 altname enp152s0f1np1 00:11:07.620 altname ens817f1np1 00:11:07.620 inet 192.168.100.9/24 scope global mlx_0_1 00:11:07.620 valid_lft forever preferred_lft forever 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 
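The abort run repeats the same environment bring-up seen earlier: load_ib_rdma_modules pulls in the IB/RDMA kernel modules, allocate_nic_ips walks the mlx_0_* interfaces, and the [[ -z $ip ]] test above is what turns a missing address into a hard failure. A small stand-alone sketch of that verification step, reusing the module list and interface names from this trace (check_nic_ip is an illustrative name, not the helper in nvmf/common.sh):

#!/usr/bin/env bash
set -e

# Same IB/RDMA modules the trace loads via load_ib_rdma_modules.
for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    sudo modprobe "$mod"
done

# Fail loudly if an RDMA-capable netdev carries no IPv4 address.
check_nic_ip() {
    local ifname=$1 addr
    addr=$(ip -o -4 addr show "$ifname" | awk '{print $4}' | cut -d/ -f1)
    if [[ -z $addr ]]; then
        echo "no IPv4 address configured on $ifname" >&2
        return 1
    fi
    echo "$ifname -> $addr"
}

check_nic_ip mlx_0_0    # 192.168.100.8 in this run
check_nic_ip mlx_0_1    # 192.168.100.9 in this run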
00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:11:07.620 192.168.100.9' 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:11:07.620 192.168.100.9' 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@457 -- # head -n 1 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:11:07.620 192.168.100.9' 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort 
-- nvmf/common.sh@458 -- # tail -n +2 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@458 -- # head -n 1 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=2810098 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 2810098 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 2810098 ']' 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:07.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:07.620 10:18:44 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:07.620 [2024-07-15 10:18:44.535469] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:11:07.620 [2024-07-15 10:18:44.535534] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:07.620 EAL: No free 2048 kB hugepages reported on node 1 00:11:07.620 [2024-07-15 10:18:44.622748] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:07.620 [2024-07-15 10:18:44.718557] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:07.620 [2024-07-15 10:18:44.718618] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:07.620 [2024-07-15 10:18:44.718626] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:07.620 [2024-07-15 10:18:44.718632] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:07.620 [2024-07-15 10:18:44.718639] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
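nvmfappstart above amounts to launching build/bin/nvmf_tgt with the requested core mask and then blocking in waitforlisten (rpc_addr=/var/tmp/spdk.sock, max_retries=100 in the trace) until the application is up and its RPC socket is reachable. The real waitforlisten does more than a file test, so the loop below is only a simplified stand-in under those assumptions:

#!/usr/bin/env bash
set -e

SPDK_BIN=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin
RPC_SOCK=/var/tmp/spdk.sock

# Start the NVMe-oF target on cores 1-3 (mask 0xE), as in the abort test above.
"$SPDK_BIN/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!

# Poll for the JSON-RPC UNIX socket while the process is still alive.
for ((i = 0; i < 100; i++)); do
    if ! kill -0 "$nvmfpid" 2>/dev/null; then
        echo "nvmf_tgt exited before creating $RPC_SOCK" >&2
        exit 1
    fi
    if [[ -S $RPC_SOCK ]]; then
        echo "nvmf_tgt (pid $nvmfpid) is listening on $RPC_SOCK"
        exit 0
    fi
    sleep 0.1
done
echo "timed out waiting for $RPC_SOCK" >&2
exit 1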
00:11:07.620 [2024-07-15 10:18:44.718774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:07.620 [2024-07-15 10:18:44.718940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:07.620 [2024-07-15 10:18:44.718941] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:08.192 10:18:45 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:08.192 10:18:45 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:11:08.192 10:18:45 nvmf_rdma.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:08.192 10:18:45 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:08.192 10:18:45 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:08.192 10:18:45 nvmf_rdma.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:08.192 10:18:45 nvmf_rdma.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256 00:11:08.192 10:18:45 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.192 10:18:45 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:08.479 [2024-07-15 10:18:45.402529] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1995920/0x1999e10) succeed. 00:11:08.479 [2024-07-15 10:18:45.415655] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1996ec0/0x19db4a0) succeed. 00:11:08.479 10:18:45 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.479 10:18:45 nvmf_rdma.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:11:08.479 10:18:45 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.479 10:18:45 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:08.479 Malloc0 00:11:08.479 10:18:45 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.479 10:18:45 nvmf_rdma.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:08.479 10:18:45 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.479 10:18:45 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:08.479 Delay0 00:11:08.479 10:18:45 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.479 10:18:45 nvmf_rdma.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:08.479 10:18:45 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.479 10:18:45 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:08.479 10:18:45 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.479 10:18:45 nvmf_rdma.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:11:08.479 10:18:45 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.479 10:18:45 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:08.479 10:18:45 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.479 10:18:45 nvmf_rdma.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:11:08.479 10:18:45 nvmf_rdma.nvmf_abort -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.479 10:18:45 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:08.479 [2024-07-15 10:18:45.587876] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:08.479 10:18:45 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.479 10:18:45 nvmf_rdma.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:11:08.479 10:18:45 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.479 10:18:45 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:08.479 10:18:45 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.479 10:18:45 nvmf_rdma.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:11:08.479 EAL: No free 2048 kB hugepages reported on node 1 00:11:08.739 [2024-07-15 10:18:45.686931] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:10.652 Initializing NVMe Controllers 00:11:10.652 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:11:10.652 controller IO queue size 128 less than required 00:11:10.652 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:11:10.652 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:11:10.652 Initialization complete. Launching workers. 00:11:10.652 NS: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 37646 00:11:10.652 CTRLR: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37707, failed to submit 62 00:11:10.652 success 37647, unsuccess 60, failed 0 00:11:10.652 10:18:47 nvmf_rdma.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:10.652 10:18:47 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.652 10:18:47 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:10.652 10:18:47 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.652 10:18:47 nvmf_rdma.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:11:10.652 10:18:47 nvmf_rdma.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:11:10.652 10:18:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:10.652 10:18:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:11:10.652 10:18:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:10.652 10:18:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:10.652 10:18:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:11:10.652 10:18:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:10.652 10:18:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:10.652 rmmod nvme_rdma 00:11:10.913 rmmod nvme_fabrics 00:11:10.913 10:18:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:10.913 10:18:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:11:10.913 10:18:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@125 -- # return 0 
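Everything the abort test needs on the target side is configured through rpc_cmd before the example client runs; the commands are all visible in the trace above and reduce to the following sequence (rpc_cmd is just scripts/rpc.py talking to /var/tmp/spdk.sock; paths shortened for readability):

    rpc=./scripts/rpc.py

    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256
    $rpc bdev_malloc_create 64 4096 -b Malloc0        # RAM-backed bdev, 4096-byte blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 \
         -r 1000000 -t 1000000 -w 1000000 -n 1000000  # large artificial latency so aborts find I/O in flight
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420

    # Client side: queue 128 outstanding reads and abort them (same arguments as in the trace).
    ./build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128

The per-run counters above (37707 aborts submitted, 62 failed to submit, 60 unsuccessful) are workload-dependent; the trace then tears the target down symmetrically with nvmf_delete_subsystem, sync, modprobe -r nvme-rdma/nvme-fabrics, and killprocess on the nvmf_tgt pid.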
00:11:10.913 10:18:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 2810098 ']' 00:11:10.913 10:18:47 nvmf_rdma.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 2810098 00:11:10.913 10:18:47 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 2810098 ']' 00:11:10.913 10:18:47 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 2810098 00:11:10.913 10:18:47 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:11:10.913 10:18:47 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:10.913 10:18:47 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2810098 00:11:10.913 10:18:47 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:10.913 10:18:47 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:10.913 10:18:47 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2810098' 00:11:10.913 killing process with pid 2810098 00:11:10.913 10:18:47 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@967 -- # kill 2810098 00:11:10.913 10:18:47 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@972 -- # wait 2810098 00:11:11.174 10:18:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:11.174 10:18:48 nvmf_rdma.nvmf_abort -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:11:11.174 00:11:11.174 real 0m11.775s 00:11:11.174 user 0m14.813s 00:11:11.174 sys 0m6.286s 00:11:11.174 10:18:48 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:11.174 10:18:48 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:11.174 ************************************ 00:11:11.174 END TEST nvmf_abort 00:11:11.174 ************************************ 00:11:11.174 10:18:48 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:11:11.174 10:18:48 nvmf_rdma -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:11:11.174 10:18:48 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:11.174 10:18:48 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:11.174 10:18:48 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:11:11.174 ************************************ 00:11:11.174 START TEST nvmf_ns_hotplug_stress 00:11:11.174 ************************************ 00:11:11.174 10:18:48 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:11:11.174 * Looking for test storage... 
00:11:11.174 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:11.174 10:18:48 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:11.174 10:18:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:11:11.174 10:18:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:11.174 10:18:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:11.174 10:18:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:11.175 10:18:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:11.175 10:18:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:11.175 10:18:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:11.175 10:18:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:11.175 10:18:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:11.175 10:18:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:11.175 10:18:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:11.175 10:18:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:11.175 10:18:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:11.175 10:18:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:11.175 10:18:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:11.175 10:18:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:11.175 10:18:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:11.175 10:18:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:11.175 10:18:48 nvmf_rdma.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:11.175 10:18:48 nvmf_rdma.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:11.175 10:18:48 nvmf_rdma.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:11.175 10:18:48 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.175 10:18:48 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.175 10:18:48 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.175 10:18:48 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:11:11.175 10:18:48 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.175 10:18:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:11:11.175 10:18:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:11.175 10:18:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:11.175 10:18:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:11.175 10:18:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:11.175 10:18:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:11.175 10:18:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:11.175 10:18:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:11.175 10:18:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:11.175 10:18:48 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:11.175 10:18:48 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:11:11.175 10:18:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:11:11.175 10:18:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:11.175 10:18:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:11.175 10:18:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:11.175 10:18:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 
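With NET_TYPE=phy, nvmftestinit does not fall back to soft-RoCE: prepare_net_devs matches the Mellanox mlx5 PCI IDs, finds the two ConnectX ports at 0000:98:00.0/0000:98:00.1, and rdma_device_init loads the kernel RDMA stack before the 192.168.100.x addresses are checked. The module-loading step traced below amounts to roughly this (sketch of load_ib_rdma_modules, Linux only):

    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"
    done

Only after these modules are present does allocate_nic_ips confirm that each mlx_0_* netdev already carries its 192.168.100.8/.9 address.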
00:11:11.175 10:18:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:11.175 10:18:48 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:11.175 10:18:48 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:11.175 10:18:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:11.175 10:18:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:11.175 10:18:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:11:11.175 10:18:48 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:11:19.317 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:11:19.317 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 
0000:98:00.0: mlx_0_0' 00:11:19.317 Found net devices under 0000:98:00.0: mlx_0_0 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:11:19.317 Found net devices under 0000:98:00.1: mlx_0_1 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # rdma_device_init 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # uname 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # allocate_nic_ips 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:19.317 10:18:56 
nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:19.317 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:19.317 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:11:19.317 altname enp152s0f0np0 00:11:19.317 altname ens817f0np0 00:11:19.317 inet 192.168.100.8/24 scope global mlx_0_0 00:11:19.317 valid_lft forever preferred_lft forever 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:19.317 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:19.318 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:19.318 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:19.318 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:19.318 
link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:11:19.318 altname enp152s0f1np1 00:11:19.318 altname ens817f1np1 00:11:19.318 inet 192.168.100.9/24 scope global mlx_0_1 00:11:19.318 valid_lft forever preferred_lft forever 00:11:19.318 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:11:19.318 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:19.318 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:19.318 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:11:19.318 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:11:19.318 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:19.318 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:19.318 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:19.318 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:19.318 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:19.318 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:19.318 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:19.318 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:19.318 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:19.318 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:19.318 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:11:19.318 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:19.318 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:19.318 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:19.318 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:19.318 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:19.318 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:19.318 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:11:19.318 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:19.318 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:19.318 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:19.318 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:19.318 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:19.318 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:19.318 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:19.318 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@87 -- # 
get_ip_address mlx_0_1 00:11:19.318 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:19.318 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:19.318 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:19.318 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:19.318 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:11:19.318 192.168.100.9' 00:11:19.318 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:11:19.318 192.168.100.9' 00:11:19.318 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # head -n 1 00:11:19.318 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:19.318 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:11:19.318 192.168.100.9' 00:11:19.318 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # tail -n +2 00:11:19.318 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # head -n 1 00:11:19.318 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:19.318 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:11:19.318 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:19.318 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:11:19.318 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:11:19.318 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:11:19.580 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:11:19.580 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:19.580 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:19.580 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:19.580 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=2815036 00:11:19.580 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 2815036 00:11:19.580 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:19.580 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 2815036 ']' 00:11:19.580 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:19.580 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:19.580 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:19.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
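nvmfappstart then brings up the target for the hot-plug test exactly as it did for the abort test: nvmf_tgt is launched with core mask 0xE (three reactors) and the harness blocks until the RPC socket answers. A simplified stand-in for that start/wait pattern, using the rpc_get_methods call that waitforlisten polls (details of the real retry loop omitted):

    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!

    # Poll /var/tmp/spdk.sock until the app is ready to accept RPCs.
    for _ in $(seq 1 100); do
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done

Once the socket answers, the DPDK/EAL and reactor start-up notices below are printed and the script registers the usual process_shm/nvmftestfini EXIT trap.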
00:11:19.580 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:19.580 10:18:56 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:19.580 [2024-07-15 10:18:56.594241] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:11:19.580 [2024-07-15 10:18:56.594313] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:19.580 EAL: No free 2048 kB hugepages reported on node 1 00:11:19.580 [2024-07-15 10:18:56.682610] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:19.580 [2024-07-15 10:18:56.776580] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:19.580 [2024-07-15 10:18:56.776646] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:19.580 [2024-07-15 10:18:56.776654] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:19.580 [2024-07-15 10:18:56.776661] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:19.580 [2024-07-15 10:18:56.776666] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:19.841 [2024-07-15 10:18:56.776801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:19.841 [2024-07-15 10:18:56.776969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:19.841 [2024-07-15 10:18:56.776970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:20.413 10:18:57 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:20.413 10:18:57 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:11:20.413 10:18:57 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:20.413 10:18:57 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:20.413 10:18:57 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:20.413 10:18:57 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:20.413 10:18:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:11:20.413 10:18:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:20.413 [2024-07-15 10:18:57.595121] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1cb8920/0x1cbce10) succeed. 00:11:20.674 [2024-07-15 10:18:57.610967] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1cb9ec0/0x1cfe4a0) succeed. 
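From here ns_hotplug_stress diverges from the abort test: cnode1 is created with a 10-namespace cap, both Delay0 and a NULL1 null bdev (initial size 1000) are attached as namespaces, spdk_nvme_perf is started against the target for 30 seconds, and the script then repeatedly removes and re-adds namespace 1 while resizing NULL1, which is what produces the long run of remove_ns/add_ns/bdev_null_resize lines that follows. A condensed sketch of that loop, matching the order of the rpc calls in the trace (PERF_PID is the backgrounded perf process):

    rpc=./scripts/rpc.py
    subsys=nqn.2016-06.io.spdk:cnode1

    ./build/bin/spdk_nvme_perf -c 0x1 \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!

    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do         # keep going while the I/O load is alive
        $rpc nvmf_subsystem_remove_ns "$subsys" 1     # rip namespace 1 out from under the initiator
        $rpc nvmf_subsystem_add_ns "$subsys" Delay0   # plug a namespace back in
        null_size=$((null_size + 1))
        $rpc bdev_null_resize NULL1 "$null_size"      # 1001, 1002, ... as seen in the trace
    done

Each iteration shows up below as one remove_ns/add_ns pair plus a bdev_null_resize with the incremented size; the 'Unexpected event type: 1' notice appears at the first resize and the run continues past it.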
00:11:20.674 10:18:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:20.935 10:18:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:20.935 [2024-07-15 10:18:58.040942] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:20.936 10:18:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:11:21.197 10:18:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:11:21.458 Malloc0 00:11:21.458 10:18:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:21.458 Delay0 00:11:21.458 10:18:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:21.718 10:18:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:11:21.718 NULL1 00:11:21.980 10:18:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:21.980 10:18:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2815500 00:11:21.980 10:18:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815500 00:11:21.980 10:18:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:11:21.980 10:18:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:21.980 EAL: No free 2048 kB hugepages reported on node 1 00:11:22.241 10:18:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:22.241 10:18:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:11:22.241 10:18:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:11:22.501 [2024-07-15 10:18:59.556834] bdev.c:5033:_tmp_bdev_event_cb: *NOTICE*: Unexpected event type: 1 00:11:22.502 true 00:11:22.502 10:18:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815500 00:11:22.502 10:18:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:22.762 10:18:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:22.762 10:18:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:11:22.762 10:18:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:11:23.025 true 00:11:23.025 10:19:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815500 00:11:23.025 10:19:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:23.286 10:19:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:23.286 10:19:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:11:23.286 10:19:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:11:23.547 true 00:11:23.547 10:19:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815500 00:11:23.547 10:19:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:23.806 10:19:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:23.806 10:19:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:11:23.806 10:19:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:11:24.066 true 00:11:24.066 10:19:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815500 00:11:24.067 10:19:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:24.067 10:19:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:24.327 10:19:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:11:24.327 10:19:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:11:24.327 true 00:11:24.588 10:19:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815500 00:11:24.589 10:19:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:24.589 10:19:01 nvmf_rdma.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:24.848 10:19:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:11:24.848 10:19:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:11:24.848 true 00:11:24.848 10:19:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815500 00:11:24.848 10:19:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:25.109 10:19:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:25.370 10:19:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:11:25.370 10:19:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:11:25.370 true 00:11:25.370 10:19:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815500 00:11:25.370 10:19:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:25.630 10:19:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:25.630 10:19:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:11:25.630 10:19:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:11:25.890 true 00:11:25.890 10:19:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815500 00:11:25.890 10:19:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:26.150 10:19:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:26.150 10:19:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:11:26.150 10:19:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:11:26.411 true 00:11:26.411 10:19:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815500 00:11:26.411 10:19:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:26.671 10:19:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:26.671 10:19:03 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:11:26.671 10:19:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:11:26.931 true 00:11:26.931 10:19:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815500 00:11:26.931 10:19:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:26.931 10:19:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:27.191 10:19:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:11:27.191 10:19:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:11:27.191 true 00:11:27.451 10:19:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815500 00:11:27.451 10:19:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:27.451 10:19:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:27.712 10:19:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:11:27.712 10:19:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:11:27.712 true 00:11:27.712 10:19:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815500 00:11:27.712 10:19:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:27.972 10:19:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:28.234 10:19:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:11:28.235 10:19:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:11:28.235 true 00:11:28.235 10:19:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815500 00:11:28.235 10:19:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:28.496 10:19:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:28.756 10:19:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:11:28.756 10:19:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:11:28.756 true 00:11:28.756 10:19:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815500 00:11:28.756 10:19:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:29.016 10:19:06 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:29.016 10:19:06 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:11:29.016 10:19:06 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:11:29.277 true 00:11:29.277 10:19:06 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815500 00:11:29.277 10:19:06 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:29.538 10:19:06 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:29.538 10:19:06 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:11:29.538 10:19:06 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:11:29.800 true 00:11:29.800 10:19:06 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815500 00:11:29.800 10:19:06 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:30.061 10:19:07 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:30.061 10:19:07 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:11:30.061 10:19:07 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:11:30.322 true 00:11:30.322 10:19:07 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815500 00:11:30.322 10:19:07 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:30.322 10:19:07 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:30.583 10:19:07 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:11:30.583 10:19:07 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:11:30.844 true 00:11:30.844 10:19:07 nvmf_rdma.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 2815500 00:11:30.844 10:19:07 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:30.844 10:19:07 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:31.105 10:19:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:11:31.105 10:19:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:11:31.367 true 00:11:31.367 10:19:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815500 00:11:31.367 10:19:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:31.367 10:19:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:31.628 10:19:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:11:31.628 10:19:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:11:31.628 true 00:11:31.889 10:19:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815500 00:11:31.889 10:19:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:31.889 10:19:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:32.150 10:19:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:11:32.150 10:19:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:11:32.150 true 00:11:32.150 10:19:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815500 00:11:32.150 10:19:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:32.411 10:19:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:32.672 10:19:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:11:32.672 10:19:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:11:32.672 true 00:11:32.672 10:19:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815500 00:11:32.672 10:19:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:32.932 10:19:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:33.192 10:19:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:11:33.192 10:19:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:11:33.192 true 00:11:33.192 10:19:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815500 00:11:33.192 10:19:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:33.453 10:19:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:33.714 10:19:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:11:33.714 10:19:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:11:33.714 true 00:11:33.714 10:19:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815500 00:11:33.714 10:19:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:33.975 10:19:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:33.975 10:19:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:11:33.975 10:19:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:11:34.234 true 00:11:34.234 10:19:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815500 00:11:34.234 10:19:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:34.493 10:19:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:34.493 10:19:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:11:34.493 10:19:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:11:34.795 true 00:11:34.795 10:19:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815500 00:11:34.795 10:19:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:35.110 10:19:11 nvmf_rdma.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:35.110 10:19:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:11:35.110 10:19:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:11:35.110 true 00:11:35.377 10:19:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815500 00:11:35.377 10:19:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:35.377 10:19:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:35.638 10:19:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:11:35.638 10:19:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:11:35.638 true 00:11:35.638 10:19:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815500 00:11:35.638 10:19:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:35.899 10:19:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:36.164 10:19:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:11:36.164 10:19:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:11:36.164 true 00:11:36.164 10:19:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815500 00:11:36.164 10:19:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:36.520 10:19:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:36.520 10:19:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:11:36.520 10:19:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:11:36.780 true 00:11:36.780 10:19:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815500 00:11:36.780 10:19:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:36.780 10:19:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:37.040 10:19:14 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:11:37.040 10:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:11:37.300 true 00:11:37.300 10:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815500 00:11:37.300 10:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:37.300 10:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:37.560 10:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:11:37.560 10:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:11:37.560 true 00:11:37.819 10:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815500 00:11:37.819 10:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:37.819 10:19:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:38.078 10:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:11:38.078 10:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:11:38.078 true 00:11:38.078 10:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815500 00:11:38.078 10:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:38.338 10:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:38.597 10:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:11:38.597 10:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:11:38.597 true 00:11:38.597 10:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815500 00:11:38.597 10:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:38.857 10:19:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:39.116 10:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:11:39.116 10:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:11:39.116 true 00:11:39.116 10:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815500 00:11:39.116 10:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:39.375 10:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:39.635 10:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:11:39.635 10:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:11:39.635 true 00:11:39.635 10:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815500 00:11:39.635 10:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:39.894 10:19:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:40.155 10:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:11:40.155 10:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:11:40.155 true 00:11:40.155 10:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815500 00:11:40.155 10:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:40.414 10:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:40.414 10:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:11:40.415 10:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:11:40.675 true 00:11:40.675 10:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815500 00:11:40.675 10:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:40.935 10:19:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:40.935 10:19:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:11:40.935 10:19:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:11:41.195 true 00:11:41.195 10:19:18 nvmf_rdma.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 2815500 00:11:41.195 10:19:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:41.456 10:19:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:41.456 10:19:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:11:41.456 10:19:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:11:41.717 true 00:11:41.717 10:19:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815500 00:11:41.717 10:19:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:41.979 10:19:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:41.979 10:19:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:11:41.979 10:19:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:11:42.240 true 00:11:42.240 10:19:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815500 00:11:42.240 10:19:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:42.240 10:19:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:42.501 10:19:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:11:42.501 10:19:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:11:42.501 true 00:11:42.762 10:19:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815500 00:11:42.762 10:19:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:42.762 10:19:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:43.023 10:19:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:11:43.023 10:19:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:11:43.023 true 00:11:43.023 10:19:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815500 00:11:43.023 10:19:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:43.284 10:19:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:43.545 10:19:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:11:43.545 10:19:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:11:43.545 true 00:11:43.545 10:19:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815500 00:11:43.545 10:19:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:43.807 10:19:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:44.068 10:19:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:11:44.068 10:19:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:11:44.068 true 00:11:44.068 10:19:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815500 00:11:44.068 10:19:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:44.328 10:19:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:44.328 10:19:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:11:44.328 10:19:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:11:44.590 true 00:11:44.590 10:19:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815500 00:11:44.590 10:19:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:44.850 10:19:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:44.850 10:19:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:11:44.850 10:19:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:11:45.111 true 00:11:45.111 10:19:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815500 00:11:45.111 10:19:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:45.372 10:19:22 nvmf_rdma.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:45.372 10:19:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:11:45.372 10:19:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:11:45.632 true 00:11:45.632 10:19:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815500 00:11:45.632 10:19:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:45.632 10:19:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:45.892 10:19:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:11:45.892 10:19:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:11:46.153 true 00:11:46.153 10:19:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815500 00:11:46.153 10:19:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:46.153 10:19:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:46.413 10:19:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:11:46.413 10:19:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:11:46.673 true 00:11:46.673 10:19:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815500 00:11:46.673 10:19:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:46.673 10:19:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:46.933 10:19:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:11:46.933 10:19:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:11:46.933 true 00:11:46.933 10:19:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815500 00:11:46.933 10:19:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:47.193 10:19:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:47.453 10:19:24 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:11:47.454 10:19:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:11:47.454 true 00:11:47.454 10:19:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815500 00:11:47.454 10:19:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:47.714 10:19:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:47.974 10:19:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:11:47.974 10:19:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:11:47.974 true 00:11:47.974 10:19:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815500 00:11:47.975 10:19:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:48.235 10:19:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:48.497 10:19:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:11:48.497 10:19:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:11:48.497 true 00:11:48.497 10:19:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815500 00:11:48.497 10:19:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:48.758 10:19:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:48.758 10:19:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:11:48.758 10:19:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:11:49.019 true 00:11:49.019 10:19:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815500 00:11:49.019 10:19:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:49.279 10:19:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:49.279 10:19:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1056 00:11:49.279 10:19:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056 00:11:49.538 true 00:11:49.538 10:19:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815500 00:11:49.538 10:19:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:49.797 10:19:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:49.797 10:19:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1057 00:11:49.797 10:19:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1057 00:11:50.057 true 00:11:50.057 10:19:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815500 00:11:50.057 10:19:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:50.317 10:19:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:50.317 10:19:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1058 00:11:50.317 10:19:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1058 00:11:50.576 true 00:11:50.576 10:19:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815500 00:11:50.576 10:19:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:50.837 10:19:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:50.837 10:19:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1059 00:11:50.837 10:19:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1059 00:11:51.097 true 00:11:51.097 10:19:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815500 00:11:51.097 10:19:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:51.358 10:19:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:51.358 10:19:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1060 00:11:51.358 10:19:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1060 00:11:51.618 true 00:11:51.618 10:19:28 nvmf_rdma.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 2815500 00:11:51.618 10:19:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:51.878 10:19:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:51.878 10:19:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1061 00:11:51.878 10:19:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1061 00:11:52.138 true 00:11:52.138 10:19:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815500 00:11:52.138 10:19:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:52.138 10:19:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:52.399 10:19:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1062 00:11:52.399 10:19:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1062 00:11:52.659 true 00:11:52.659 10:19:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815500 00:11:52.659 10:19:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:52.659 10:19:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:52.917 10:19:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1063 00:11:52.918 10:19:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1063 00:11:53.177 true 00:11:53.177 10:19:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815500 00:11:53.177 10:19:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:53.177 Initializing NVMe Controllers 00:11:53.177 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:11:53.177 Controller IO queue size 128, less than required. 00:11:53.177 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:53.177 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:11:53.177 Initialization complete. Launching workers. 
00:11:53.177 ========================================================
00:11:53.177                                                                                     Latency(us)
00:11:53.177 Device Information                                                             :       IOPS      MiB/s    Average        min        max
00:11:53.177 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   48159.90      23.52    2657.64    1087.71    3182.45
00:11:53.177 ========================================================
00:11:53.177 Total                                                                          :   48159.90      23.52    2657.64    1087.71    3182.45
00:11:53.177 
00:11:53.177 10:19:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:53.437 10:19:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1064
00:11:53.437 10:19:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1064
00:11:53.696 true
00:11:53.696 10:19:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2815500
00:11:53.696 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2815500) - No such process
00:11:53.696 10:19:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2815500
00:11:53.696 10:19:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:53.696 10:19:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:11:53.956 10:19:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:11:53.956 10:19:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:11:53.956 10:19:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:11:53.956 10:19:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:11:53.956 10:19:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:11:53.956 null0
00:11:53.956 10:19:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:11:53.956 10:19:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:11:53.956 10:19:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:11:54.215 null1
00:11:54.215 10:19:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:11:54.215 10:19:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:11:54.215 10:19:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:11:54.475 null2
00:11:54.475 10:19:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:11:54.475 10:19:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:11:54.475 10:19:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:11:54.475 null3 00:11:54.475 10:19:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:54.475 10:19:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:54.475 10:19:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:11:54.735 null4 00:11:54.735 10:19:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:54.735 10:19:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:54.735 10:19:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:11:54.994 null5 00:11:54.994 10:19:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:54.994 10:19:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:54.994 10:19:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:11:54.994 null6 00:11:54.994 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:54.994 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:54.994 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:11:55.254 null7 00:11:55.254 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:55.254 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:55.254 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:11:55.254 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:55.254 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:11:55.254 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:55.254 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:11:55.254 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:55.254 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:55.254 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:55.254 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:55.254 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:55.254 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
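The long run of kill -0 2815500 / nvmf_subsystem_remove_ns / nvmf_subsystem_add_ns Delay0 / bdev_null_resize NULL1 10xx entries earlier in this trace is the single-namespace phase of ns_hotplug_stress.sh (script lines 44-50 in the trace prefixes): while the background I/O process (pid 2815500 in this run, whose latency summary is printed above once it exits) is still alive, namespace 1 is swapped out and back and the NULL1 bdev is grown by one step per iteration. A minimal bash sketch consistent with those traced lines, not the verbatim script; io_pid and the starting null_size are illustrative placeholders:

  rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  null_size=1000                                  # illustrative starting value
  while kill -0 "$io_pid"; do                     # io_pid: placeholder for the traced I/O process (2815500 here)
      $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
      null_size=$((null_size + 1))
      $rpc_py bdev_null_resize NULL1 $null_size   # prints "true" on success, as seen in the trace
  done

The loop ends exactly where the trace shows kill reporting "No such process": the I/O workload has finished, so the namespace churn stops and the script moves on to the multi-threaded phase below.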
00:11:55.254 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:55.254 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:55.254 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:11:55.254 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:11:55.254 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:55.254 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:55.254 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:55.254 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:55.254 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:55.254 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:55.254 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:11:55.254 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:11:55.254 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:55.254 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:55.254 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:55.254 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:55.254 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:55.254 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:55.254 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:11:55.254 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:11:55.254 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:55.254 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:55.254 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
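The local nsid=... bdev=... and (( i < 10 )) entries interleaved here come from the add_remove helper (script lines 14-18 in the trace prefixes), which each background worker runs against its own null bdev: attach the bdev as a fixed namespace ID, detach it, and repeat ten times. A sketch reconstructed from those traced lines, with rpc_py pointing at spdk/scripts/rpc.py as in the earlier sketch:

  add_remove() {
      local nsid=$1 bdev=$2
      for ((i = 0; i < 10; i++)); do
          $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"    # line 17 in the trace
          $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"            # line 18 in the trace
      done
  }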
00:11:55.254 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:55.254 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:55.254 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:55.254 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:11:55.254 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:11:55.254 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:55.254 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:55.255 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:55.255 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:55.255 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:55.255 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:55.255 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:11:55.255 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:11:55.255 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:55.255 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:55.255 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:55.255 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:55.255 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:55.255 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:55.255 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:11:55.255 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:11:55.255 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:55.255 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:55.255 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
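The pids+=($!) entries threaded through this section belong to the driver loops (script lines 58-66 in the trace prefixes): eight null bdevs are created with size 100 and a 4096-byte block size, one add_remove worker per bdev is launched in the background with its pid recorded, and the script then waits on all eight (the wait on pids 2822320 2822322 ... appears just below). A sketch consistent with the trace, using add_remove as above and the same rpc_py path:

  nthreads=8
  pids=()
  for ((i = 0; i < nthreads; i++)); do
      $rpc_py bdev_null_create "null$i" 100 4096   # size/block-size arguments exactly as traced
  done
  for ((i = 0; i < nthreads; i++)); do
      add_remove $((i + 1)) "null$i" &             # nsid i+1 backed by null$i, matching the trace
      pids+=($!)
  done
  wait "${pids[@]}"

Running the workers in parallel against the same subsystem is the point of the stress test: concurrent namespace attach/detach on nqn.2016-06.io.spdk:cnode1 while the RDMA target stays up.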
00:11:55.255 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:55.255 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:55.255 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:55.255 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2822320 2822322 2822324 2822327 2822330 2822333 2822336 2822339 00:11:55.255 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:11:55.255 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:11:55.255 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:55.255 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:55.255 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:55.255 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:55.255 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:55.255 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:55.515 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:55.516 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:55.516 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:55.516 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:55.516 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:55.516 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:55.516 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:55.516 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:55.516 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:55.516 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i 
< 10 )) 00:11:55.516 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:55.516 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:55.516 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:55.516 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:55.516 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:55.516 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:55.516 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:55.516 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:55.516 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:55.516 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:55.516 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:55.516 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:55.516 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:55.516 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:55.516 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:55.516 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:55.783 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:55.783 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:55.783 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:55.783 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:55.783 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:55.783 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:55.783 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:55.783 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:55.783 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:55.783 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:55.783 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:55.783 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:55.783 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:55.783 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:55.783 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:55.783 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:55.783 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:55.783 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:55.783 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:55.783 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:55.783 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:55.783 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:55.783 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:56.044 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:56.044 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.044 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:56.044 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:56.044 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.044 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:56.044 10:19:32 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:56.044 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.044 10:19:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:56.044 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:56.044 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:56.044 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.044 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:56.044 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:56.044 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:56.044 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:56.044 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:56.044 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:56.044 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:56.044 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:56.044 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.044 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:56.044 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:56.044 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:56.044 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.044 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:56.304 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:56.304 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( 
i < 10 )) 00:11:56.304 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:56.304 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:56.304 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.304 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:56.304 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:56.304 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.304 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:56.304 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:56.304 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.304 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:56.304 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:56.304 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.304 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:56.304 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:56.304 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:56.304 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:56.304 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.304 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:56.304 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:56.304 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:56.304 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:56.304 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:56.564 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:56.564 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:56.564 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.564 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:56.564 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:56.564 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:56.564 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.564 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:56.564 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:56.564 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.564 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:56.564 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:56.564 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.564 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:56.564 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:56.564 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.564 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:56.564 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:56.564 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.564 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:56.564 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:56.564 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.564 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:56.564 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:56.564 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:56.564 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.564 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:56.825 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:56.825 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:56.825 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:56.825 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:56.825 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:56.825 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:56.825 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:56.825 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.825 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:56.825 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:56.825 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:56.825 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:56.825 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.825 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.825 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:56.825 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:56.825 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:56.825 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.825 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:56.825 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:56.825 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.825 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:56.825 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:56.825 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.825 10:19:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:57.085 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.085 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.085 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:57.085 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:57.085 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.085 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.085 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:57.085 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:57.085 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:57.085 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:57.085 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:57.085 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:57.085 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:57.085 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.085 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.085 
10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:57.085 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:57.346 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.346 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.346 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:57.346 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.346 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.346 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:57.346 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.346 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.346 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:57.346 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.346 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.346 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:57.346 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.346 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.346 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:57.346 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.346 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.346 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:57.346 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:57.346 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.346 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.346 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:57.346 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:57.346 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:57.346 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:57.346 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:57.346 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:57.346 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:57.606 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.606 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.606 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:57.606 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:57.606 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.606 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.606 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:57.606 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.606 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.606 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:57.606 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.606 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.606 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:57.606 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.606 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.606 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:57.606 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.606 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.606 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:57.606 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.606 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.606 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:57.606 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:57.606 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.606 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.606 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:57.606 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:57.866 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:57.866 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:57.866 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:57.866 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:57.867 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:57.867 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.867 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.867 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:57.867 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:57.867 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.867 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.867 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.867 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:57.867 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.867 10:19:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:57.867 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.867 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.867 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:57.867 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:58.128 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.128 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.128 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:58.128 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.128 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.128 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:58.128 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.128 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.128 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:58.128 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:58.128 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.128 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.128 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:58.128 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 
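The loop traced above is the core of the hot-plug stress: eight add_remove workers were started in the background (note the earlier "wait 2822320 2822322 ..." on their PIDs), and each one attaches its null bdev as a namespace of nqn.2016-06.io.spdk:cnode1 and detaches it again, ten times, while the other workers do the same concurrently on their own namespace IDs. A minimal sketch of that worker, reconstructed only from the xtrace entries for ns_hotplug_stress.sh@14-@18 and @62-@66 shown above (the exact script body and the rpc_py variable name are assumptions; the commands and argument order are taken verbatim from the trace):

    #!/usr/bin/env bash
    # Sketch reconstructed from the xtrace above; not the literal ns_hotplug_stress.sh source.
    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            # attach the null bdev as namespace $nsid of cnode1 ...
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            # ... then detach it again while the seven other workers race on their own namespaces
            "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }

    # one worker per namespace (nsid 1..8 over null0..null7), reaped with wait as in the trace
    pids=""
    for ((n = 0; n < 8; n++)); do
        add_remove $((n + 1)) "null$n" &
        pids="$pids $!"
    done
    wait $pids

Because add and remove calls from the eight workers interleave arbitrarily, the RPC timestamps above show namespaces 1-8 being created and deleted in no fixed order, which is exactly the race the test is exercising.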
00:11:58.128 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.128 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.128 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:58.128 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:58.128 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:58.128 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:58.128 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:58.128 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.128 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.128 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:58.388 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:58.388 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.388 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.388 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:58.388 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.388 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.388 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:58.388 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:58.388 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.388 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.388 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:58.388 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.388 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.388 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:58.388 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.388 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.388 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:58.389 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:58.389 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.389 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.389 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:58.389 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:58.389 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:58.389 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:58.389 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.389 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.389 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:58.649 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.649 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.650 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:58.650 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:58.650 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.650 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.650 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.650 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.650 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.650 10:19:35 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.650 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.650 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.650 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.650 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.910 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.911 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.911 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:11:58.911 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:11:58.911 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:58.911 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:11:58.911 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:58.911 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:58.911 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:11:58.911 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:58.911 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:58.911 rmmod nvme_rdma 00:11:58.911 rmmod nvme_fabrics 00:11:58.911 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:58.911 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:11:58.911 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:11:58.911 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 2815036 ']' 00:11:58.911 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 2815036 00:11:58.911 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 2815036 ']' 00:11:58.911 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 2815036 00:11:58.911 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:11:58.911 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:58.911 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2815036 00:11:58.911 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:58.911 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:58.911 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2815036' 00:11:58.911 killing process with pid 2815036 00:11:58.911 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 2815036 00:11:58.911 10:19:35 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 2815036 00:11:59.172 10:19:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:59.172 10:19:36 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ 
rdma == \t\c\p ]] 00:11:59.172 00:11:59.172 real 0m47.917s 00:11:59.172 user 3m21.753s 00:11:59.172 sys 0m15.188s 00:11:59.172 10:19:36 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:59.172 10:19:36 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:59.172 ************************************ 00:11:59.172 END TEST nvmf_ns_hotplug_stress 00:11:59.172 ************************************ 00:11:59.172 10:19:36 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:11:59.172 10:19:36 nvmf_rdma -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:11:59.172 10:19:36 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:59.172 10:19:36 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:59.172 10:19:36 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:11:59.172 ************************************ 00:11:59.172 START TEST nvmf_connect_stress 00:11:59.172 ************************************ 00:11:59.172 10:19:36 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:11:59.172 * Looking for test storage... 00:11:59.172 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:59.172 10:19:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:59.172 10:19:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:11:59.172 10:19:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:59.172 10:19:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:59.172 10:19:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:59.172 10:19:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:59.172 10:19:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:59.172 10:19:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:59.172 10:19:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:59.172 10:19:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:59.172 10:19:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:59.172 10:19:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:59.172 10:19:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:59.172 10:19:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:59.172 10:19:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:59.172 10:19:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:59.172 10:19:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:59.172 10:19:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:59.172 10:19:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:59.172 10:19:36 nvmf_rdma.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:59.172 10:19:36 nvmf_rdma.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:59.172 10:19:36 nvmf_rdma.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:59.172 10:19:36 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.172 10:19:36 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.172 10:19:36 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.172 10:19:36 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:11:59.172 10:19:36 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.172 10:19:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:11:59.172 10:19:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:59.172 10:19:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:59.172 10:19:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:59.172 10:19:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:59.172 10:19:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:59.172 
10:19:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:59.172 10:19:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:59.172 10:19:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:59.172 10:19:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:11:59.172 10:19:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:11:59.172 10:19:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:59.172 10:19:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:59.172 10:19:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:59.172 10:19:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:59.172 10:19:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:59.172 10:19:36 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:59.172 10:19:36 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:59.172 10:19:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:59.172 10:19:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:59.172 10:19:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:11:59.172 10:19:36 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:07.316 10:19:44 
nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:12:07.316 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:12:07.316 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ 
mlx5 == e810 ]] 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:12:07.316 Found net devices under 0000:98:00.0: mlx_0_0 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:12:07.316 Found net devices under 0000:98:00.1: mlx_0_1 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@420 -- # rdma_device_init 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@58 -- # uname 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@502 -- # allocate_nic_ips 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:07.316 10:19:44 
nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:07.316 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:07.317 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:07.317 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:12:07.317 altname enp152s0f0np0 00:12:07.317 altname ens817f0np0 00:12:07.317 inet 192.168.100.8/24 scope global mlx_0_0 00:12:07.317 valid_lft forever preferred_lft forever 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 
00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:07.317 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:07.317 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:12:07.317 altname enp152s0f1np1 00:12:07.317 altname ens817f1np1 00:12:07.317 inet 192.168.100.9/24 scope global mlx_0_1 00:12:07.317 valid_lft forever preferred_lft forever 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 
00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:12:07.317 192.168.100.9' 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:12:07.317 192.168.100.9' 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@457 -- # head -n 1 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:12:07.317 192.168.100.9' 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@458 -- # tail -n +2 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@458 -- # head -n 1 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=2827459 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 2827459 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 2827459 ']' 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:07.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:07.317 10:19:44 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:07.317 [2024-07-15 10:19:44.406512] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:12:07.317 [2024-07-15 10:19:44.406587] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:07.317 EAL: No free 2048 kB hugepages reported on node 1 00:12:07.317 [2024-07-15 10:19:44.493079] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:07.580 [2024-07-15 10:19:44.588610] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:07.580 [2024-07-15 10:19:44.588674] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:07.580 [2024-07-15 10:19:44.588682] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:07.580 [2024-07-15 10:19:44.588689] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:07.580 [2024-07-15 10:19:44.588695] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:07.580 [2024-07-15 10:19:44.588847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:07.580 [2024-07-15 10:19:44.588988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:07.580 [2024-07-15 10:19:44.588988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:08.149 10:19:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:08.150 10:19:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:12:08.150 10:19:45 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:08.150 10:19:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:08.150 10:19:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:08.150 10:19:45 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:08.150 10:19:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:08.150 10:19:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.150 10:19:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:08.150 [2024-07-15 10:19:45.276874] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x156c920/0x1570e10) succeed. 00:12:08.150 [2024-07-15 10:19:45.290924] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x156dec0/0x15b24a0) succeed. 
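The records above show nvmftestinit discovering the two mlx5 ports, assigning 192.168.100.8 and 192.168.100.9, loading the IB/RDMA modules, starting nvmf_tgt (pid 2827459) and creating the RDMA transport; the records that follow add the subsystem, listener and null bdev that the stress test connects to. A minimal sketch of the same provisioning driven by hand through SPDK's scripts/rpc.py, assuming the target listens on the default /var/tmp/spdk.sock and that the address and sizes from this run still apply:

#!/usr/bin/env bash
# Hand-driven version of the rpc_cmd calls traced in this section; the
# 192.168.100.8 address and the sizes below are copied from this run.
set -euo pipefail
RPC=./scripts/rpc.py   # SPDK JSON-RPC client, default socket /var/tmp/spdk.sock

# RDMA transport with the same shared-buffer and IO-unit settings as the trace
$RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

# Subsystem capped at 10 namespaces, plus an RDMA listener on port 4420
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

# Null backing bdev (1000 MB, 512-byte blocks) used by the stress workload
$RPC bdev_null_create NULL1 1000 512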
00:12:08.410 10:19:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.410 10:19:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:08.410 10:19:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.410 10:19:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:08.410 10:19:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.410 10:19:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:08.410 10:19:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.410 10:19:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:08.410 [2024-07-15 10:19:45.406472] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:08.410 10:19:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.410 10:19:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:08.410 10:19:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.410 10:19:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:08.410 NULL1 00:12:08.410 10:19:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.410 10:19:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2827587 00:12:08.410 10:19:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:08.410 10:19:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:08.410 10:19:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:12:08.410 10:19:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:12:08.410 10:19:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:08.410 10:19:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:08.410 10:19:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:08.410 10:19:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:08.410 10:19:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:08.410 10:19:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:08.410 10:19:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:08.410 10:19:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:08.410 10:19:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:08.410 10:19:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:08.410 10:19:45 nvmf_rdma.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:08.410 EAL: No free 2048 kB hugepages reported on node 1 00:12:08.410 10:19:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:08.410 10:19:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:08.410 10:19:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:08.410 10:19:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:08.410 10:19:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:08.410 10:19:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:08.410 10:19:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:08.410 10:19:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:08.410 10:19:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:08.410 10:19:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:08.410 10:19:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:08.410 10:19:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:08.410 10:19:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:08.411 10:19:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:08.411 10:19:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:08.411 10:19:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:08.411 10:19:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:08.411 10:19:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:08.411 10:19:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:08.411 10:19:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:08.411 10:19:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:08.411 10:19:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:08.411 10:19:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:08.411 10:19:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:08.411 10:19:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:08.411 10:19:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:08.411 10:19:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:08.411 10:19:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:08.411 10:19:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:08.411 10:19:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2827587 00:12:08.411 10:19:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:08.411 10:19:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.411 10:19:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:08.671 10:19:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.671 10:19:45 
nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2827587 00:12:08.671 10:19:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:08.671 10:19:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.671 10:19:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:09.242 10:19:46 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.242 10:19:46 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2827587 00:12:09.242 10:19:46 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:09.242 10:19:46 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.242 10:19:46 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:09.502 10:19:46 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.502 10:19:46 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2827587 00:12:09.502 10:19:46 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:09.502 10:19:46 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.502 10:19:46 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:09.762 10:19:46 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.762 10:19:46 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2827587 00:12:09.762 10:19:46 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:09.762 10:19:46 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.762 10:19:46 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:10.023 10:19:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.023 10:19:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2827587 00:12:10.023 10:19:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:10.023 10:19:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.023 10:19:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:10.594 10:19:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.594 10:19:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2827587 00:12:10.594 10:19:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:10.594 10:19:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.594 10:19:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:10.854 10:19:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.854 10:19:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2827587 00:12:10.854 10:19:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:10.854 10:19:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.854 10:19:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:11.115 10:19:48 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.115 10:19:48 
nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2827587 00:12:11.115 10:19:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:11.115 10:19:48 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.115 10:19:48 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:11.376 10:19:48 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.376 10:19:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2827587 00:12:11.376 10:19:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:11.376 10:19:48 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.376 10:19:48 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:11.637 10:19:48 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.637 10:19:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2827587 00:12:11.637 10:19:48 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:11.637 10:19:48 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.637 10:19:48 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:12.207 10:19:49 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.207 10:19:49 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2827587 00:12:12.207 10:19:49 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:12.208 10:19:49 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.208 10:19:49 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:12.468 10:19:49 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.469 10:19:49 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2827587 00:12:12.469 10:19:49 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:12.469 10:19:49 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.469 10:19:49 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:12.730 10:19:49 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.730 10:19:49 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2827587 00:12:12.730 10:19:49 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:12.730 10:19:49 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.730 10:19:49 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:12.990 10:19:50 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.990 10:19:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2827587 00:12:12.990 10:19:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:12.990 10:19:50 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.990 10:19:50 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:13.250 10:19:50 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.250 10:19:50 
nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2827587 00:12:13.250 10:19:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:13.250 10:19:50 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.250 10:19:50 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:13.821 10:19:50 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.821 10:19:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2827587 00:12:13.821 10:19:50 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:13.821 10:19:50 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.821 10:19:50 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:14.080 10:19:51 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.080 10:19:51 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2827587 00:12:14.080 10:19:51 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:14.080 10:19:51 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.080 10:19:51 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:14.340 10:19:51 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.340 10:19:51 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2827587 00:12:14.340 10:19:51 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:14.340 10:19:51 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.340 10:19:51 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:14.599 10:19:51 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.599 10:19:51 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2827587 00:12:14.599 10:19:51 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:14.599 10:19:51 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.599 10:19:51 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:15.228 10:19:52 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.228 10:19:52 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2827587 00:12:15.229 10:19:52 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:15.229 10:19:52 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.229 10:19:52 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:15.229 10:19:52 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.229 10:19:52 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2827587 00:12:15.229 10:19:52 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:15.229 10:19:52 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.229 10:19:52 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:15.796 10:19:52 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:15.796 10:19:52 
nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2827587 00:12:15.796 10:19:52 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:15.796 10:19:52 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:15.796 10:19:52 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:16.054 10:19:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.054 10:19:53 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2827587 00:12:16.055 10:19:53 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:16.055 10:19:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.055 10:19:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:16.314 10:19:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.314 10:19:53 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2827587 00:12:16.314 10:19:53 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:16.314 10:19:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.314 10:19:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:16.573 10:19:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.573 10:19:53 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2827587 00:12:16.573 10:19:53 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:16.573 10:19:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.573 10:19:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:17.143 10:19:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.143 10:19:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2827587 00:12:17.143 10:19:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:17.143 10:19:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.143 10:19:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:17.403 10:19:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.403 10:19:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2827587 00:12:17.403 10:19:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:17.403 10:19:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.403 10:19:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:17.662 10:19:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.662 10:19:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2827587 00:12:17.662 10:19:54 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:17.662 10:19:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.662 10:19:54 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:17.922 10:19:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.922 10:19:55 
nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2827587 00:12:17.922 10:19:55 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:17.922 10:19:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.922 10:19:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:18.182 10:19:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.182 10:19:55 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2827587 00:12:18.182 10:19:55 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:18.182 10:19:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.182 10:19:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:18.443 Testing NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:12:18.703 10:19:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.703 10:19:55 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2827587 00:12:18.703 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2827587) - No such process 00:12:18.703 10:19:55 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2827587 00:12:18.703 10:19:55 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:18.703 10:19:55 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:18.703 10:19:55 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:12:18.703 10:19:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:18.703 10:19:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:12:18.703 10:19:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:12:18.703 10:19:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:18.703 10:19:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:12:18.703 10:19:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:18.703 10:19:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:18.703 rmmod nvme_rdma 00:12:18.703 rmmod nvme_fabrics 00:12:18.703 10:19:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:18.703 10:19:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:12:18.703 10:19:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:12:18.703 10:19:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 2827459 ']' 00:12:18.703 10:19:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 2827459 00:12:18.703 10:19:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 2827459 ']' 00:12:18.703 10:19:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 2827459 00:12:18.703 10:19:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:12:18.703 10:19:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:18.703 10:19:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2827459 
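The repeated kill -0 2827587 / rpc_cmd pairs above are connect_stress.sh keeping the target's RPC path busy for as long as the connect_stress binary (PERF_PID 2827587) stays alive; once kill -0 reports "No such process" the script waits on the PID, removes rpc.txt and tears the target down via nvmftestfini. A minimal sketch of that supervision pattern, with the worker command line taken from this trace and the RPC batch file treated as a placeholder prepared beforehand (the real script's variable names and pacing may differ):

# Poll-while-alive pattern, as suggested by the connect_stress.sh trace above.
# Paths are relative to an SPDK tree; RPCS contents are not shown in this log.
RPCS=/tmp/rpc.txt

./test/nvme/connect_stress/connect_stress -c 0x1 \
    -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &
WORKER_PID=$!

while kill -0 "$WORKER_PID" 2>/dev/null; do
    # Keep issuing batched RPCs while the worker keeps connecting and disconnecting.
    ./scripts/rpc.py < "$RPCS" || true
done

wait "$WORKER_PID" 2>/dev/null || true   # reap it once kill -0 starts failing
rm -f "$RPCS"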
00:12:18.703 10:19:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:18.703 10:19:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:18.703 10:19:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2827459' 00:12:18.703 killing process with pid 2827459 00:12:18.703 10:19:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 2827459 00:12:18.703 10:19:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 2827459 00:12:18.964 10:19:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:18.964 10:19:55 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:12:18.964 00:12:18.964 real 0m19.757s 00:12:18.964 user 0m42.012s 00:12:18.964 sys 0m7.409s 00:12:18.964 10:19:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:18.964 10:19:55 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:18.964 ************************************ 00:12:18.964 END TEST nvmf_connect_stress 00:12:18.964 ************************************ 00:12:18.964 10:19:56 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:12:18.964 10:19:56 nvmf_rdma -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:12:18.964 10:19:56 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:18.964 10:19:56 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:18.964 10:19:56 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:12:18.964 ************************************ 00:12:18.964 START TEST nvmf_fused_ordering 00:12:18.964 ************************************ 00:12:18.964 10:19:56 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:12:18.964 * Looking for test storage... 
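From here the log moves on to nvmf_fused_ordering, which re-sources test/nvmf/common.sh; among the variables set in the following records are a host NQN generated with nvme gen-hostnqn and a host ID taken from the UUID embedded in it. A small sketch of that derivation, assuming nvme-cli is installed; the script's exact string handling may differ:

# Derive the host NQN / host ID pair seen in the records below (requires nvme-cli).
NVME_HOSTNQN=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:<random-uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}       # keep only the trailing UUID
echo "$NVME_HOSTNQN -> $NVME_HOSTID"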
00:12:18.964 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:18.964 10:19:56 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:19.225 10:19:56 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:12:19.225 10:19:56 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:19.225 10:19:56 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:19.225 10:19:56 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:19.225 10:19:56 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:19.225 10:19:56 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:19.225 10:19:56 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:19.225 10:19:56 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:19.225 10:19:56 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:19.225 10:19:56 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:19.225 10:19:56 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:19.225 10:19:56 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:19.225 10:19:56 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:19.225 10:19:56 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:19.225 10:19:56 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:19.225 10:19:56 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:19.225 10:19:56 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:19.225 10:19:56 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:19.225 10:19:56 nvmf_rdma.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:19.225 10:19:56 nvmf_rdma.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:19.225 10:19:56 nvmf_rdma.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:19.225 10:19:56 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.225 10:19:56 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.225 10:19:56 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.225 10:19:56 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:12:19.226 10:19:56 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.226 10:19:56 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:12:19.226 10:19:56 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:19.226 10:19:56 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:19.226 10:19:56 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:19.226 10:19:56 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:19.226 10:19:56 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:19.226 10:19:56 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:19.226 10:19:56 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:19.226 10:19:56 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:19.226 10:19:56 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:12:19.226 10:19:56 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:12:19.226 10:19:56 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:19.226 10:19:56 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:19.226 10:19:56 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:19.226 10:19:56 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:19.226 10:19:56 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:19.226 10:19:56 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:12:19.226 10:19:56 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:19.226 10:19:56 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:19.226 10:19:56 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:19.226 10:19:56 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:12:19.226 10:19:56 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:27.366 10:20:03 
nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:12:27.366 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:12:27.366 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:12:27.366 Found net devices under 0000:98:00.0: mlx_0_0 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:12:27.366 Found net devices under 0000:98:00.1: mlx_0_1 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@420 -- # rdma_device_init 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@58 -- # uname 00:12:27.366 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:27.367 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:27.367 10:20:03 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@502 -- # allocate_nic_ips 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@105 -- # 
continue 2 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:27.367 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:27.367 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:12:27.367 altname enp152s0f0np0 00:12:27.367 altname ens817f0np0 00:12:27.367 inet 192.168.100.8/24 scope global mlx_0_0 00:12:27.367 valid_lft forever preferred_lft forever 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:27.367 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:27.367 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:12:27.367 altname enp152s0f1np1 00:12:27.367 altname ens817f1np1 00:12:27.367 inet 192.168.100.9/24 scope global mlx_0_1 00:12:27.367 valid_lft forever preferred_lft forever 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- 
nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:12:27.367 192.168.100.9' 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:12:27.367 192.168.100.9' 
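The get_ip_address calls traced here reduce to standard iproute2 plumbing. A minimal standalone sketch, assuming the mlx_0_0/mlx_0_1 names observed in this run rather than deriving them:
# Print the IPv4 address of each RDMA-capable netdev (interface names taken from this run).
for ifc in mlx_0_0 mlx_0_1; do
  # field 4 of "ip -o -4 addr show" is the CIDR address, e.g. 192.168.100.8/24
  ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
done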
00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@457 -- # head -n 1 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:12:27.367 192.168.100.9' 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@458 -- # tail -n +2 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@458 -- # head -n 1 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=2833951 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 2833951 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 2833951 ']' 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:27.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:27.367 10:20:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:27.367 [2024-07-15 10:20:04.266999] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:12:27.367 [2024-07-15 10:20:04.267053] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:27.367 EAL: No free 2048 kB hugepages reported on node 1 00:12:27.367 [2024-07-15 10:20:04.350307] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:27.367 [2024-07-15 10:20:04.423608] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
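Condensed sketch of the steps traced above: the first two RDMA IPs become the target addresses and nvmf_tgt is started with the same core mask and trace flags. The wait loop on the RPC socket stands in for the harness's waitforlisten helper and is an assumption, not the harness code.
RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'   # addresses discovered above
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
# Start the target with the flags shown above and wait for its default RPC socket.
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.5; done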
00:12:27.367 [2024-07-15 10:20:04.423660] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:27.367 [2024-07-15 10:20:04.423668] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:27.367 [2024-07-15 10:20:04.423675] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:27.367 [2024-07-15 10:20:04.423681] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:27.368 [2024-07-15 10:20:04.423705] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:27.939 10:20:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:27.939 10:20:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:12:27.939 10:20:05 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:27.939 10:20:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:27.939 10:20:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:27.939 10:20:05 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:27.939 10:20:05 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:27.939 10:20:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.939 10:20:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:27.939 [2024-07-15 10:20:05.126196] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1f8b360/0x1f8f850) succeed. 00:12:28.200 [2024-07-15 10:20:05.140295] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1f8c860/0x1fd0ee0) succeed. 
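The nvmf_create_transport call above can be reproduced directly with scripts/rpc.py against the default socket; the two "Create IB device" notices are the expected result on a dual-port mlx5 NIC. A sketch, assuming direct rpc.py invocation (the harness goes through its rpc_cmd wrapper):
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
# Same flags as the traced rpc_cmd: RDMA transport, 1024 shared buffers, 8 KiB IO unit size.
$rpc -s /var/tmp/spdk.sock nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192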
00:12:28.200 10:20:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.200 10:20:05 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:28.200 10:20:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.200 10:20:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:28.200 10:20:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.200 10:20:05 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:28.200 10:20:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.200 10:20:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:28.200 [2024-07-15 10:20:05.220542] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:28.200 10:20:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.200 10:20:05 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:28.200 10:20:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.200 10:20:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:28.200 NULL1 00:12:28.200 10:20:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.200 10:20:05 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:12:28.200 10:20:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.200 10:20:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:28.200 10:20:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.200 10:20:05 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:28.200 10:20:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.200 10:20:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:28.200 10:20:05 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.200 10:20:05 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:28.200 [2024-07-15 10:20:05.290160] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
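The remaining setup and the test invocation, condensed into plain rpc.py calls with the same arguments as the rpc_cmd trace above (direct rpc.py usage is an assumption about invocation, not the harness code):
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
$rpc bdev_null_create NULL1 1000 512      # 1000 MB null bdev, 512-byte blocks ("size: 1GB" below)
$rpc bdev_wait_for_examine
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
# Drive the subsystem with the fused_ordering tool over the listener just created.
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering \
  -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'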
00:12:28.200 [2024-07-15 10:20:05.290214] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2834271 ] 00:12:28.200 EAL: No free 2048 kB hugepages reported on node 1 00:12:28.461 Attached to nqn.2016-06.io.spdk:cnode1 00:12:28.461 Namespace ID: 1 size: 1GB 00:12:28.461 fused_ordering(0) 00:12:28.461 fused_ordering(1) 00:12:28.461 fused_ordering(2) 00:12:28.461 fused_ordering(3) 00:12:28.461 fused_ordering(4) 00:12:28.461 fused_ordering(5) 00:12:28.461 fused_ordering(6) 00:12:28.461 fused_ordering(7) 00:12:28.461 fused_ordering(8) 00:12:28.461 fused_ordering(9) 00:12:28.461 fused_ordering(10) 00:12:28.461 fused_ordering(11) 00:12:28.461 fused_ordering(12) 00:12:28.461 fused_ordering(13) 00:12:28.461 fused_ordering(14) 00:12:28.461 fused_ordering(15) 00:12:28.461 fused_ordering(16) 00:12:28.461 fused_ordering(17) 00:12:28.461 fused_ordering(18) 00:12:28.461 fused_ordering(19) 00:12:28.461 fused_ordering(20) 00:12:28.461 fused_ordering(21) 00:12:28.461 fused_ordering(22) 00:12:28.461 fused_ordering(23) 00:12:28.461 fused_ordering(24) 00:12:28.461 fused_ordering(25) 00:12:28.461 fused_ordering(26) 00:12:28.461 fused_ordering(27) 00:12:28.461 fused_ordering(28) 00:12:28.461 fused_ordering(29) 00:12:28.461 fused_ordering(30) 00:12:28.461 fused_ordering(31) 00:12:28.461 fused_ordering(32) 00:12:28.461 fused_ordering(33) 00:12:28.461 fused_ordering(34) 00:12:28.461 fused_ordering(35) 00:12:28.461 fused_ordering(36) 00:12:28.461 fused_ordering(37) 00:12:28.461 fused_ordering(38) 00:12:28.461 fused_ordering(39) 00:12:28.461 fused_ordering(40) 00:12:28.461 fused_ordering(41) 00:12:28.461 fused_ordering(42) 00:12:28.461 fused_ordering(43) 00:12:28.461 fused_ordering(44) 00:12:28.461 fused_ordering(45) 00:12:28.461 fused_ordering(46) 00:12:28.461 fused_ordering(47) 00:12:28.461 fused_ordering(48) 00:12:28.461 fused_ordering(49) 00:12:28.461 fused_ordering(50) 00:12:28.461 fused_ordering(51) 00:12:28.461 fused_ordering(52) 00:12:28.461 fused_ordering(53) 00:12:28.461 fused_ordering(54) 00:12:28.461 fused_ordering(55) 00:12:28.461 fused_ordering(56) 00:12:28.461 fused_ordering(57) 00:12:28.461 fused_ordering(58) 00:12:28.461 fused_ordering(59) 00:12:28.461 fused_ordering(60) 00:12:28.461 fused_ordering(61) 00:12:28.461 fused_ordering(62) 00:12:28.461 fused_ordering(63) 00:12:28.461 fused_ordering(64) 00:12:28.461 fused_ordering(65) 00:12:28.461 fused_ordering(66) 00:12:28.461 fused_ordering(67) 00:12:28.461 fused_ordering(68) 00:12:28.461 fused_ordering(69) 00:12:28.461 fused_ordering(70) 00:12:28.461 fused_ordering(71) 00:12:28.461 fused_ordering(72) 00:12:28.461 fused_ordering(73) 00:12:28.461 fused_ordering(74) 00:12:28.461 fused_ordering(75) 00:12:28.461 fused_ordering(76) 00:12:28.461 fused_ordering(77) 00:12:28.461 fused_ordering(78) 00:12:28.461 fused_ordering(79) 00:12:28.461 fused_ordering(80) 00:12:28.461 fused_ordering(81) 00:12:28.461 fused_ordering(82) 00:12:28.461 fused_ordering(83) 00:12:28.461 fused_ordering(84) 00:12:28.461 fused_ordering(85) 00:12:28.461 fused_ordering(86) 00:12:28.461 fused_ordering(87) 00:12:28.461 fused_ordering(88) 00:12:28.461 fused_ordering(89) 00:12:28.461 fused_ordering(90) 00:12:28.461 fused_ordering(91) 00:12:28.461 fused_ordering(92) 00:12:28.461 fused_ordering(93) 00:12:28.461 fused_ordering(94) 00:12:28.461 fused_ordering(95) 00:12:28.461 fused_ordering(96) 
fused_ordering(97) ... fused_ordering(956) (sequential fused_ordering counter lines for commands 97 through 956, identical in form to the runs immediately above and below)
00:12:28.988 fused_ordering(957) 00:12:28.988 fused_ordering(958) 00:12:28.988 fused_ordering(959) 00:12:28.988 fused_ordering(960) 00:12:28.988 fused_ordering(961) 00:12:28.988 fused_ordering(962) 00:12:28.988 fused_ordering(963) 00:12:28.988 fused_ordering(964) 00:12:28.988 fused_ordering(965) 00:12:28.988 fused_ordering(966) 00:12:28.988 fused_ordering(967) 00:12:28.988 fused_ordering(968) 00:12:28.988 fused_ordering(969) 00:12:28.988 fused_ordering(970) 00:12:28.988 fused_ordering(971) 00:12:28.988 fused_ordering(972) 00:12:28.988 fused_ordering(973) 00:12:28.988 fused_ordering(974) 00:12:28.988 fused_ordering(975) 00:12:28.988 fused_ordering(976) 00:12:28.988 fused_ordering(977) 00:12:28.988 fused_ordering(978) 00:12:28.988 fused_ordering(979) 00:12:28.988 fused_ordering(980) 00:12:28.988 fused_ordering(981) 00:12:28.988 fused_ordering(982) 00:12:28.988 fused_ordering(983) 00:12:28.988 fused_ordering(984) 00:12:28.988 fused_ordering(985) 00:12:28.988 fused_ordering(986) 00:12:28.988 fused_ordering(987) 00:12:28.988 fused_ordering(988) 00:12:28.988 fused_ordering(989) 00:12:28.988 fused_ordering(990) 00:12:28.988 fused_ordering(991) 00:12:28.988 fused_ordering(992) 00:12:28.988 fused_ordering(993) 00:12:28.988 fused_ordering(994) 00:12:28.988 fused_ordering(995) 00:12:28.988 fused_ordering(996) 00:12:28.988 fused_ordering(997) 00:12:28.988 fused_ordering(998) 00:12:28.988 fused_ordering(999) 00:12:28.988 fused_ordering(1000) 00:12:28.988 fused_ordering(1001) 00:12:28.988 fused_ordering(1002) 00:12:28.988 fused_ordering(1003) 00:12:28.988 fused_ordering(1004) 00:12:28.988 fused_ordering(1005) 00:12:28.988 fused_ordering(1006) 00:12:28.988 fused_ordering(1007) 00:12:28.988 fused_ordering(1008) 00:12:28.988 fused_ordering(1009) 00:12:28.988 fused_ordering(1010) 00:12:28.988 fused_ordering(1011) 00:12:28.988 fused_ordering(1012) 00:12:28.988 fused_ordering(1013) 00:12:28.988 fused_ordering(1014) 00:12:28.988 fused_ordering(1015) 00:12:28.988 fused_ordering(1016) 00:12:28.988 fused_ordering(1017) 00:12:28.988 fused_ordering(1018) 00:12:28.988 fused_ordering(1019) 00:12:28.988 fused_ordering(1020) 00:12:28.988 fused_ordering(1021) 00:12:28.988 fused_ordering(1022) 00:12:28.988 fused_ordering(1023) 00:12:28.988 10:20:06 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:12:28.988 10:20:06 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:12:28.988 10:20:06 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:28.988 10:20:06 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:12:28.988 10:20:06 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:12:28.988 10:20:06 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:28.988 10:20:06 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:12:28.988 10:20:06 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:28.988 10:20:06 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:28.988 rmmod nvme_rdma 00:12:29.321 rmmod nvme_fabrics 00:12:29.321 10:20:06 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:29.321 10:20:06 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:12:29.321 10:20:06 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:12:29.321 10:20:06 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 2833951 ']' 
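Teardown as traced above and continued below, reduced to its effective commands; $nvmfpid is the PID captured when nvmf_tgt was launched (2833951 in this run), and the "|| true" guards plus the plain kill/wait are an assumption standing in for the harness's killprocess retry logic:
sync
modprobe -v -r nvme-rdma    || true
modprobe -v -r nvme-fabrics || true
kill "$nvmfpid" && wait "$nvmfpid"   # killprocess $nvmfpid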
00:12:29.321 10:20:06 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 2833951 00:12:29.321 10:20:06 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 2833951 ']' 00:12:29.321 10:20:06 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 2833951 00:12:29.321 10:20:06 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:12:29.321 10:20:06 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:29.321 10:20:06 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2833951 00:12:29.321 10:20:06 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:29.321 10:20:06 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:29.321 10:20:06 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2833951' 00:12:29.321 killing process with pid 2833951 00:12:29.321 10:20:06 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 2833951 00:12:29.321 10:20:06 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 2833951 00:12:29.321 10:20:06 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:29.321 10:20:06 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:12:29.321 00:12:29.321 real 0m10.407s 00:12:29.321 user 0m5.458s 00:12:29.321 sys 0m6.320s 00:12:29.321 10:20:06 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:29.321 10:20:06 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:29.321 ************************************ 00:12:29.321 END TEST nvmf_fused_ordering 00:12:29.321 ************************************ 00:12:29.616 10:20:06 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:12:29.616 10:20:06 nvmf_rdma -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:12:29.616 10:20:06 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:29.616 10:20:06 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:29.616 10:20:06 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:12:29.616 ************************************ 00:12:29.616 START TEST nvmf_delete_subsystem 00:12:29.616 ************************************ 00:12:29.616 10:20:06 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:12:29.616 * Looking for test storage... 
00:12:29.616 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:29.616 10:20:06 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:29.616 10:20:06 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:12:29.616 10:20:06 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:29.616 10:20:06 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:29.616 10:20:06 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:29.616 10:20:06 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:29.616 10:20:06 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:29.616 10:20:06 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:29.616 10:20:06 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:29.616 10:20:06 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:29.616 10:20:06 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:29.616 10:20:06 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:29.617 10:20:06 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:29.617 10:20:06 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:29.617 10:20:06 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:29.617 10:20:06 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:29.617 10:20:06 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:29.617 10:20:06 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:29.617 10:20:06 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:29.617 10:20:06 nvmf_rdma.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:29.617 10:20:06 nvmf_rdma.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:29.617 10:20:06 nvmf_rdma.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:29.617 10:20:06 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.617 10:20:06 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.617 10:20:06 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.617 10:20:06 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:12:29.617 10:20:06 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.617 10:20:06 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:12:29.617 10:20:06 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:29.617 10:20:06 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:29.617 10:20:06 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:29.617 10:20:06 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:29.617 10:20:06 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:29.617 10:20:06 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:29.617 10:20:06 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:29.617 10:20:06 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:29.617 10:20:06 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:12:29.617 10:20:06 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:12:29.617 10:20:06 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:29.617 10:20:06 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:29.617 10:20:06 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:29.617 10:20:06 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:29.617 10:20:06 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:29.617 10:20:06 nvmf_rdma.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:29.617 10:20:06 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:29.617 10:20:06 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:29.617 10:20:06 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:29.617 10:20:06 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:12:29.617 10:20:06 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:37.776 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:37.776 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:12:37.776 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:37.776 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:37.776 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:37.776 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:37.776 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:37.776 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:12:37.776 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:37.776 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:12:37.776 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:12:37.776 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:12:37.776 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:12:37.776 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:12:37.776 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:12:37.776 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:37.776 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:37.776 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:37.776 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:37.776 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:37.776 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:37.776 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:37.776 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:37.776 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:37.776 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:37.776 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:37.776 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:37.776 10:20:14 
nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:37.776 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:37.776 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:37.776 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:37.776 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:37.776 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:37.776 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:37.776 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:12:37.776 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:12:37.776 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:37.776 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:37.776 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:37.776 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:37.776 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:37.776 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:37.776 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:37.776 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:12:37.776 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:12:37.777 Found net devices under 0000:98:00.0: mlx_0_0 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:37.777 10:20:14 
nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:12:37.777 Found net devices under 0000:98:00.1: mlx_0_1 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # rdma_device_init 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # uname 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # allocate_nic_ips 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:37.777 10:20:14 
nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:37.777 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:37.777 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:12:37.777 altname enp152s0f0np0 00:12:37.777 altname ens817f0np0 00:12:37.777 inet 192.168.100.8/24 scope global mlx_0_0 00:12:37.777 valid_lft forever preferred_lft forever 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:37.777 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:37.777 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:12:37.777 altname enp152s0f1np1 00:12:37.777 altname ens817f1np1 00:12:37.777 inet 192.168.100.9/24 scope global mlx_0_1 00:12:37.777 valid_lft forever preferred_lft forever 00:12:37.777 10:20:14 
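The interface discovery traced above reduces to mapping each whitelisted Mellanox PCI function to its net device under /sys and then parsing ip -o -4 addr show. A standalone sketch of that lookup; the PCI address, interface name, and IP are the ones reported in this run and are examples only:

    pci=0000:98:00.0                                 # example PCI function from this run
    pci_net_devs=(/sys/bus/pci/devices/$pci/net/*)   # a single port per function is assumed
    netdev=${pci_net_devs[0]##*/}                    # e.g. mlx_0_0
    ip_addr=$(ip -o -4 addr show "$netdev" | awk '{print $4}' | cut -d/ -f1)
    echo "$netdev -> $ip_addr"                       # 192.168.100.8 on this host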
nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- 
nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:12:37.777 192.168.100.9' 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:12:37.777 192.168.100.9' 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # head -n 1 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # head -n 1 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:12:37.777 192.168.100.9' 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # tail -n +2 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=2838644 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 2838644 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 2838644 ']' 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:37.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:37.777 10:20:14 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:37.777 [2024-07-15 10:20:14.775964] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
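The address bookkeeping traced just above then splits the discovered list into a first and second target IP with head/tail. Condensed, using the addresses reported in this run:

    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'     # one address per RDMA-capable port
    NVMF_FIRST_TARGET_IP=$(head -n 1 <<< "$RDMA_IP_LIST")
    NVMF_SECOND_TARGET_IP=$(tail -n +2 <<< "$RDMA_IP_LIST")
    [ -n "$NVMF_FIRST_TARGET_IP" ] || { echo 'no RDMA address found'; exit 1; }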
00:12:37.777 [2024-07-15 10:20:14.776033] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:37.777 EAL: No free 2048 kB hugepages reported on node 1 00:12:37.777 [2024-07-15 10:20:14.846444] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:37.777 [2024-07-15 10:20:14.919892] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:37.777 [2024-07-15 10:20:14.919933] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:37.777 [2024-07-15 10:20:14.919941] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:37.777 [2024-07-15 10:20:14.919947] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:37.777 [2024-07-15 10:20:14.919953] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:37.777 [2024-07-15 10:20:14.920017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:37.777 [2024-07-15 10:20:14.920019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.349 10:20:15 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:38.349 10:20:15 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:12:38.349 10:20:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:38.349 10:20:15 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:38.349 10:20:15 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:38.609 10:20:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:38.609 10:20:15 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:38.609 10:20:15 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.609 10:20:15 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:38.609 [2024-07-15 10:20:15.617154] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1d2bb70/0x1d30060) succeed. 00:12:38.609 [2024-07-15 10:20:15.631142] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1d2d070/0x1d716f0) succeed. 
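Condensed from the nvmfappstart and nvmf_create_transport steps traced above: the target is launched in the background, the script waits until its RPC socket answers, and the RDMA transport is created. A sketch assuming rpc_cmd resolves to scripts/rpc.py against the default /var/tmp/spdk.sock; the polling loop is only an approximation of the waitforlisten helper, not its actual code:

    # Start nvmf_tgt with the core mask and trace flags used in this run.
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!

    # Approximate waitforlisten: poll until the RPC server responds.
    for _ in {1..100}; do
        ./scripts/rpc.py spdk_get_version >/dev/null 2>&1 && break
        sleep 0.1
    done

    # Transport options match the trace: RDMA, 1024 shared buffers, 8 KiB I/O unit.
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192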
00:12:38.609 10:20:15 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.609 10:20:15 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:38.609 10:20:15 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.609 10:20:15 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:38.609 10:20:15 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.609 10:20:15 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:38.609 10:20:15 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.609 10:20:15 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:38.609 [2024-07-15 10:20:15.717161] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:38.609 10:20:15 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.609 10:20:15 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:38.609 10:20:15 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.609 10:20:15 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:38.609 NULL1 00:12:38.609 10:20:15 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.609 10:20:15 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:38.609 10:20:15 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.609 10:20:15 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:38.609 Delay0 00:12:38.609 10:20:15 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.609 10:20:15 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:38.609 10:20:15 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.609 10:20:15 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:38.609 10:20:15 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.609 10:20:15 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2838916 00:12:38.609 10:20:15 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:12:38.609 10:20:15 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:12:38.609 EAL: No free 2048 kB hugepages reported on node 1 00:12:38.869 [2024-07-15 10:20:15.827991] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
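The setup just traced builds the subsystem on top of a delay bdev, so I/O is guaranteed to still be in flight when the subsystem gets deleted, then points spdk_nvme_perf at the listener in the background. A condensed sketch using the same RPCs and parameters shown above (binary paths shortened; the 2 s ramp-up sleep matches the script):

    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    ./scripts/rpc.py bdev_null_create NULL1 1000 512        # 1000 MB backing bdev, 512-byte blocks
    ./scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000         # ~1 s added latency per I/O (values in us)
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

    ./build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    sleep 2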
00:12:40.780 10:20:17 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:40.780 10:20:17 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:40.780 10:20:17 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:41.721 NVMe io qpair process completion error 00:12:41.721 NVMe io qpair process completion error 00:12:41.721 NVMe io qpair process completion error 00:12:41.721 NVMe io qpair process completion error 00:12:41.721 NVMe io qpair process completion error 00:12:41.721 NVMe io qpair process completion error 00:12:41.981 10:20:18 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.981 10:20:18 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:12:41.981 10:20:18 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2838916 00:12:41.981 10:20:18 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:12:42.242 10:20:19 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:12:42.242 10:20:19 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2838916 00:12:42.242 10:20:19 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:12:42.814 Read completed with error (sct=0, sc=8) 00:12:42.814 starting I/O failed: -6 00:12:42.814 Write completed with error (sct=0, sc=8) 00:12:42.814 starting I/O failed: -6 00:12:42.814 Read completed with error (sct=0, sc=8) 00:12:42.814 starting I/O failed: -6 00:12:42.814 Write completed with error (sct=0, sc=8) 00:12:42.814 starting I/O failed: -6 00:12:42.814 Write completed with error (sct=0, sc=8) 00:12:42.814 starting I/O failed: -6 00:12:42.814 Read completed with error (sct=0, sc=8) 00:12:42.814 starting I/O failed: -6 00:12:42.814 Read completed with error (sct=0, sc=8) 00:12:42.814 starting I/O failed: -6 00:12:42.814 Write completed with error (sct=0, sc=8) 00:12:42.814 starting I/O failed: -6 00:12:42.814 Write completed with error (sct=0, sc=8) 00:12:42.814 starting I/O failed: -6 00:12:42.814 Write completed with error (sct=0, sc=8) 00:12:42.814 starting I/O failed: -6 00:12:42.814 Read completed with error (sct=0, sc=8) 00:12:42.814 starting I/O failed: -6 00:12:42.814 Read completed with error (sct=0, sc=8) 00:12:42.814 starting I/O failed: -6 00:12:42.814 Write completed with error (sct=0, sc=8) 00:12:42.814 starting I/O failed: -6 00:12:42.814 Read completed with error (sct=0, sc=8) 00:12:42.814 starting I/O failed: -6 00:12:42.814 Read completed with error (sct=0, sc=8) 00:12:42.814 starting I/O failed: -6 00:12:42.814 Read completed with error (sct=0, sc=8) 00:12:42.814 starting I/O failed: -6 00:12:42.814 Read completed with error (sct=0, sc=8) 00:12:42.814 starting I/O failed: -6 00:12:42.814 Read completed with error (sct=0, sc=8) 00:12:42.814 starting I/O failed: -6 00:12:42.814 Read completed with error (sct=0, sc=8) 00:12:42.814 starting I/O failed: -6 00:12:42.814 Read completed with error (sct=0, sc=8) 00:12:42.814 starting I/O failed: -6 00:12:42.814 Write completed with error (sct=0, sc=8) 00:12:42.814 starting I/O failed: -6 00:12:42.814 Read completed with error (sct=0, sc=8) 00:12:42.814 starting I/O failed: -6 00:12:42.814 Write completed with error (sct=0, sc=8) 00:12:42.814 starting I/O failed: -6 00:12:42.814 Read completed with error (sct=0, sc=8) 00:12:42.814 
starting I/O failed: -6 00:12:42.814 Read completed with error (sct=0, sc=8) 00:12:42.814 starting I/O failed: -6 00:12:42.814 Read completed with error (sct=0, sc=8) 00:12:42.814 starting I/O failed: -6 00:12:42.814 Read completed with error (sct=0, sc=8) 00:12:42.814 starting I/O failed: -6 00:12:42.814 Read completed with error (sct=0, sc=8) 00:12:42.814 starting I/O failed: -6 00:12:42.814 Write completed with error (sct=0, sc=8) 00:12:42.814 starting I/O failed: -6 00:12:42.814 Write completed with error (sct=0, sc=8) 00:12:42.814 starting I/O failed: -6 00:12:42.814 Read completed with error (sct=0, sc=8) 00:12:42.814 starting I/O failed: -6 00:12:42.814 Write completed with error (sct=0, sc=8) 00:12:42.814 starting I/O failed: -6 00:12:42.814 Read completed with error (sct=0, sc=8) 00:12:42.814 starting I/O failed: -6 00:12:42.814 Write completed with error (sct=0, sc=8) 00:12:42.814 starting I/O failed: -6 00:12:42.814 Read completed with error (sct=0, sc=8) 00:12:42.814 starting I/O failed: -6 00:12:42.814 Write completed with error (sct=0, sc=8) 00:12:42.814 starting I/O failed: -6 00:12:42.814 Read completed with error (sct=0, sc=8) 00:12:42.814 starting I/O failed: -6 00:12:42.814 Read completed with error (sct=0, sc=8) 00:12:42.814 starting I/O failed: -6 00:12:42.814 Read completed with error (sct=0, sc=8) 00:12:42.814 starting I/O failed: -6 00:12:42.814 Read completed with error (sct=0, sc=8) 00:12:42.814 starting I/O failed: -6 00:12:42.814 Write completed with error (sct=0, sc=8) 00:12:42.814 starting I/O failed: -6 00:12:42.814 Read completed with error (sct=0, sc=8) 00:12:42.814 starting I/O failed: -6 00:12:42.814 Write completed with error (sct=0, sc=8) 00:12:42.814 starting I/O failed: -6 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 starting I/O failed: -6 00:12:42.815 Write completed with error (sct=0, sc=8) 00:12:42.815 starting I/O failed: -6 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 starting I/O failed: -6 00:12:42.815 Write completed with error (sct=0, sc=8) 00:12:42.815 starting I/O failed: -6 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 starting I/O failed: -6 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 Write completed with error (sct=0, sc=8) 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 Write completed with error (sct=0, sc=8) 00:12:42.815 Write completed with error (sct=0, sc=8) 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 Write completed with error (sct=0, sc=8) 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 Write completed with error (sct=0, sc=8) 00:12:42.815 Write completed with error (sct=0, sc=8) 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 Write completed with error (sct=0, sc=8) 00:12:42.815 Write completed with error (sct=0, sc=8) 00:12:42.815 Write completed with error (sct=0, sc=8) 00:12:42.815 Write completed with error (sct=0, sc=8) 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 Write completed with error (sct=0, sc=8) 00:12:42.815 Read 
completed with error (sct=0, sc=8) 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 Write completed with error (sct=0, sc=8) 00:12:42.815 Write completed with error (sct=0, sc=8) 00:12:42.815 Write completed with error (sct=0, sc=8) 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 Write completed with error (sct=0, sc=8) 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 Write completed with error (sct=0, sc=8) 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 starting I/O failed: -6 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 starting I/O failed: -6 00:12:42.815 Write completed with error (sct=0, sc=8) 00:12:42.815 starting I/O failed: -6 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 starting I/O failed: -6 00:12:42.815 Write completed with error (sct=0, sc=8) 00:12:42.815 starting I/O failed: -6 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 starting I/O failed: -6 00:12:42.815 Write completed with error (sct=0, sc=8) 00:12:42.815 starting I/O failed: -6 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 starting I/O failed: -6 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 starting I/O failed: -6 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 starting I/O failed: -6 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 starting I/O failed: -6 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 starting I/O failed: -6 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 starting I/O failed: -6 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 starting I/O failed: -6 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 starting I/O failed: -6 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 starting I/O failed: -6 00:12:42.815 Write completed with error (sct=0, sc=8) 00:12:42.815 starting I/O failed: -6 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 starting I/O failed: -6 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 starting I/O failed: -6 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 starting I/O failed: -6 00:12:42.815 Write completed with error (sct=0, sc=8) 00:12:42.815 starting I/O failed: -6 00:12:42.815 Write completed with error (sct=0, sc=8) 00:12:42.815 starting I/O failed: -6 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 starting I/O failed: -6 00:12:42.815 Write completed with error (sct=0, sc=8) 00:12:42.815 starting I/O failed: -6 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 starting I/O failed: -6 00:12:42.815 Write completed with error (sct=0, sc=8) 00:12:42.815 starting I/O failed: -6 
00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 starting I/O failed: -6 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 starting I/O failed: -6 00:12:42.815 Write completed with error (sct=0, sc=8) 00:12:42.815 starting I/O failed: -6 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 starting I/O failed: -6 00:12:42.815 Write completed with error (sct=0, sc=8) 00:12:42.815 starting I/O failed: -6 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 starting I/O failed: -6 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 starting I/O failed: -6 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 starting I/O failed: -6 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 starting I/O failed: -6 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 starting I/O failed: -6 00:12:42.815 Write completed with error (sct=0, sc=8) 00:12:42.815 starting I/O failed: -6 00:12:42.815 Write completed with error (sct=0, sc=8) 00:12:42.815 starting I/O failed: -6 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 starting I/O failed: -6 00:12:42.815 Write completed with error (sct=0, sc=8) 00:12:42.815 starting I/O failed: -6 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 starting I/O failed: -6 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 starting I/O failed: -6 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 starting I/O failed: -6 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 starting I/O failed: -6 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 starting I/O failed: -6 00:12:42.815 Write completed with error (sct=0, sc=8) 00:12:42.815 starting I/O failed: -6 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 starting I/O failed: -6 00:12:42.815 Write completed with error (sct=0, sc=8) 00:12:42.815 starting I/O failed: -6 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 Write completed with error (sct=0, sc=8) 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 Write completed with error (sct=0, sc=8) 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 Write completed with error (sct=0, sc=8) 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 Write completed with error (sct=0, sc=8) 00:12:42.815 Write completed with error (sct=0, sc=8) 00:12:42.815 Write completed with error (sct=0, sc=8) 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 Write completed with error (sct=0, sc=8) 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 Write completed with error (sct=0, sc=8) 00:12:42.815 Write completed with error (sct=0, sc=8) 00:12:42.815 Write completed with error (sct=0, sc=8) 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 Write completed with error (sct=0, sc=8) 00:12:42.815 Write completed with error (sct=0, sc=8) 00:12:42.815 Write completed with error (sct=0, sc=8) 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 Read 
completed with error (sct=0, sc=8) 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 Write completed with error (sct=0, sc=8) 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 Write completed with error (sct=0, sc=8) 00:12:42.815 Write completed with error (sct=0, sc=8) 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 Write completed with error (sct=0, sc=8) 00:12:42.815 Write completed with error (sct=0, sc=8) 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 Write completed with error (sct=0, sc=8) 00:12:42.815 Write completed with error (sct=0, sc=8) 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 Write completed with error (sct=0, sc=8) 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 Write completed with error (sct=0, sc=8) 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 Write completed with error (sct=0, sc=8) 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 Write completed with error (sct=0, sc=8) 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 Read completed with error (sct=0, sc=8) 00:12:42.815 Write completed with error (sct=0, sc=8) 00:12:42.816 Read completed with error (sct=0, sc=8) 00:12:42.816 Read completed with error (sct=0, sc=8) 00:12:42.816 Read completed with error (sct=0, sc=8) 00:12:42.816 Read completed with error (sct=0, sc=8) 00:12:42.816 Read completed with error (sct=0, sc=8) 00:12:42.816 Read completed with error (sct=0, sc=8) 00:12:42.816 Read completed with error (sct=0, sc=8) 00:12:42.816 Read completed with error (sct=0, sc=8) 00:12:42.816 Read completed with error (sct=0, sc=8) 00:12:42.816 Read completed with error (sct=0, sc=8) 00:12:42.816 Write completed with error (sct=0, sc=8) 00:12:42.816 Read completed with error (sct=0, sc=8) 00:12:42.816 Read completed with error (sct=0, sc=8) 00:12:42.816 Read completed with error (sct=0, sc=8) 00:12:42.816 Read completed with error (sct=0, sc=8) 00:12:42.816 Read completed with error (sct=0, sc=8) 00:12:42.816 Read completed with error (sct=0, sc=8) 00:12:42.816 Read completed with error (sct=0, sc=8) 00:12:42.816 Read completed with error (sct=0, sc=8) 00:12:42.816 Write completed with error (sct=0, sc=8) 00:12:42.816 Read completed with error (sct=0, sc=8) 00:12:42.816 Write completed with error (sct=0, sc=8) 00:12:42.816 Read completed with error (sct=0, sc=8) 00:12:42.816 Read completed with error (sct=0, sc=8) 00:12:42.816 Read completed with error (sct=0, sc=8) 00:12:42.816 Write completed with error (sct=0, sc=8) 00:12:42.816 Write completed with error (sct=0, sc=8) 00:12:42.816 Read completed with error (sct=0, sc=8) 00:12:42.816 Read completed with error (sct=0, sc=8) 
00:12:42.816 Write completed with error (sct=0, sc=8) 00:12:42.816 Read completed with error (sct=0, sc=8) 00:12:42.816 Read completed with error (sct=0, sc=8) 00:12:42.816 Read completed with error (sct=0, sc=8) 00:12:42.816 Write completed with error (sct=0, sc=8) 00:12:42.816 Read completed with error (sct=0, sc=8) 00:12:42.816 Read completed with error (sct=0, sc=8) 00:12:42.816 Write completed with error (sct=0, sc=8) 00:12:42.816 Read completed with error (sct=0, sc=8) 00:12:42.816 Read completed with error (sct=0, sc=8) 00:12:42.816 Write completed with error (sct=0, sc=8) 00:12:42.816 Write completed with error (sct=0, sc=8) 00:12:42.816 Read completed with error (sct=0, sc=8) 00:12:42.816 Write completed with error (sct=0, sc=8) 00:12:42.816 Write completed with error (sct=0, sc=8) 00:12:42.816 Write completed with error (sct=0, sc=8) 00:12:42.816 Initializing NVMe Controllers 00:12:42.816 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:12:42.816 Controller IO queue size 128, less than required. 00:12:42.816 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:42.816 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:12:42.816 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:12:42.816 Initialization complete. Launching workers. 00:12:42.816 ======================================================== 00:12:42.816 Latency(us) 00:12:42.816 Device Information : IOPS MiB/s Average min max 00:12:42.816 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 80.66 0.04 1591301.01 1000071.02 2968187.22 00:12:42.816 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 80.66 0.04 1592785.41 1001771.33 2969567.16 00:12:42.816 ======================================================== 00:12:42.816 Total : 161.32 0.08 1592043.21 1000071.02 2969567.16 00:12:42.816 00:12:42.816 [2024-07-15 10:20:19.935873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:12:42.816 [2024-07-15 10:20:19.935904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
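What the completion errors and truncated perf summary above demonstrate: the subsystem was deleted while the delayed I/O was still queued, so the perf run is expected to die with errors. The script then waits (bounded) for the process to exit and asserts a non-zero status. A sketch of that pattern; the 30-iteration bound and 0.5 s sleep match the first wait loop in the trace, and the expected-failure check below stands in for the NOT wait helper:

    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do            # perf still running?
        (( delay++ > 30 )) && { echo 'perf did not exit'; exit 1; }
        sleep 0.5
    done

    # A clean exit here would mean the deletion did not disturb the I/O path.
    if wait "$perf_pid"; then
        echo 'unexpected: spdk_nvme_perf exited without errors'
        exit 1
    fi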
00:12:42.816 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:12:42.816 10:20:19 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:12:42.816 10:20:19 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2838916 00:12:42.816 10:20:19 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:12:43.387 10:20:20 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:12:43.387 10:20:20 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2838916 00:12:43.387 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2838916) - No such process 00:12:43.387 10:20:20 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2838916 00:12:43.387 10:20:20 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:12:43.387 10:20:20 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 2838916 00:12:43.387 10:20:20 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:12:43.387 10:20:20 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:43.387 10:20:20 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:12:43.387 10:20:20 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:43.387 10:20:20 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 2838916 00:12:43.387 10:20:20 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:12:43.387 10:20:20 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:43.387 10:20:20 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:43.387 10:20:20 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:43.387 10:20:20 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:43.387 10:20:20 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.387 10:20:20 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:43.387 10:20:20 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.387 10:20:20 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:43.387 10:20:20 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.387 10:20:20 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:43.387 [2024-07-15 10:20:20.466459] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:43.387 10:20:20 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.387 10:20:20 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:43.387 10:20:20 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.387 10:20:20 nvmf_rdma.nvmf_delete_subsystem -- 
common/autotest_common.sh@10 -- # set +x 00:12:43.387 10:20:20 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.387 10:20:20 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2839731 00:12:43.387 10:20:20 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:12:43.387 10:20:20 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:12:43.387 10:20:20 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2839731 00:12:43.387 10:20:20 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:43.387 EAL: No free 2048 kB hugepages reported on node 1 00:12:43.387 [2024-07-15 10:20:20.561402] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:12:43.958 10:20:20 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:43.958 10:20:20 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2839731 00:12:43.958 10:20:20 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:44.528 10:20:21 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:44.528 10:20:21 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2839731 00:12:44.528 10:20:21 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:45.099 10:20:21 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:45.099 10:20:21 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2839731 00:12:45.099 10:20:21 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:45.360 10:20:22 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:45.360 10:20:22 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2839731 00:12:45.360 10:20:22 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:45.931 10:20:23 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:45.931 10:20:23 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2839731 00:12:45.931 10:20:23 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:46.504 10:20:23 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:46.504 10:20:23 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2839731 00:12:46.504 10:20:23 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:47.075 10:20:24 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:47.075 10:20:24 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2839731 00:12:47.075 10:20:24 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:47.335 10:20:24 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- 
# (( delay++ > 20 )) 00:12:47.335 10:20:24 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2839731 00:12:47.335 10:20:24 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:47.907 10:20:25 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:47.907 10:20:25 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2839731 00:12:47.907 10:20:25 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:48.478 10:20:25 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:48.478 10:20:25 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2839731 00:12:48.478 10:20:25 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:49.051 10:20:26 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:49.051 10:20:26 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2839731 00:12:49.051 10:20:26 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:49.622 10:20:26 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:49.622 10:20:26 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2839731 00:12:49.622 10:20:26 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:49.883 10:20:27 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:49.883 10:20:27 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2839731 00:12:49.883 10:20:27 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:50.453 10:20:27 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:50.453 10:20:27 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2839731 00:12:50.453 10:20:27 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:50.714 Initializing NVMe Controllers 00:12:50.714 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:12:50.714 Controller IO queue size 128, less than required. 00:12:50.714 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:50.714 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:12:50.714 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:12:50.714 Initialization complete. Launching workers. 
00:12:50.714 ======================================================== 00:12:50.714 Latency(us) 00:12:50.714 Device Information : IOPS MiB/s Average min max 00:12:50.714 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001071.82 1000052.58 1003207.46 00:12:50.714 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1001665.04 1000046.09 1005587.03 00:12:50.714 ======================================================== 00:12:50.714 Total : 256.00 0.12 1001368.43 1000046.09 1005587.03 00:12:50.714 00:12:50.975 10:20:28 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:50.975 10:20:28 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2839731 00:12:50.975 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2839731) - No such process 00:12:50.975 10:20:28 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2839731 00:12:50.975 10:20:28 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:12:50.975 10:20:28 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:12:50.975 10:20:28 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:50.975 10:20:28 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:12:50.975 10:20:28 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:12:50.975 10:20:28 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:50.975 10:20:28 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:12:50.975 10:20:28 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:50.975 10:20:28 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:50.975 rmmod nvme_rdma 00:12:50.975 rmmod nvme_fabrics 00:12:50.975 10:20:28 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:50.975 10:20:28 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:12:50.975 10:20:28 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:12:50.975 10:20:28 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 2838644 ']' 00:12:50.975 10:20:28 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 2838644 00:12:50.975 10:20:28 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 2838644 ']' 00:12:50.975 10:20:28 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 2838644 00:12:50.975 10:20:28 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:12:50.975 10:20:28 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:50.975 10:20:28 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2838644 00:12:50.975 10:20:28 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:50.975 10:20:28 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:50.975 10:20:28 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2838644' 00:12:50.975 killing process with pid 2838644 00:12:50.975 10:20:28 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 
2838644 00:12:50.975 10:20:28 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 2838644 00:12:51.236 10:20:28 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:51.236 10:20:28 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:12:51.236 00:12:51.236 real 0m21.807s 00:12:51.236 user 0m50.395s 00:12:51.236 sys 0m7.029s 00:12:51.236 10:20:28 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:51.236 10:20:28 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:51.236 ************************************ 00:12:51.236 END TEST nvmf_delete_subsystem 00:12:51.236 ************************************ 00:12:51.236 10:20:28 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:12:51.236 10:20:28 nvmf_rdma -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=rdma 00:12:51.236 10:20:28 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:51.236 10:20:28 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:51.236 10:20:28 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:12:51.236 ************************************ 00:12:51.236 START TEST nvmf_ns_masking 00:12:51.236 ************************************ 00:12:51.236 10:20:28 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=rdma 00:12:51.498 * Looking for test storage... 00:12:51.498 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:51.498 10:20:28 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:51.498 10:20:28 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:12:51.498 10:20:28 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:51.498 10:20:28 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:51.498 10:20:28 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:51.498 10:20:28 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:51.498 10:20:28 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:51.498 10:20:28 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:51.498 10:20:28 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:51.498 10:20:28 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:51.498 10:20:28 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:51.498 10:20:28 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:51.498 10:20:28 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:51.498 10:20:28 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:51.498 10:20:28 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:51.498 10:20:28 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:51.498 10:20:28 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:51.498 10:20:28 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
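[annotation] Before the ns_masking test proper starts, the sourced nvmf/common.sh establishes the host identity (NVME_HOSTNQN / NVME_HOSTID) that every later "nvme connect" reuses. A minimal sketch of that relationship as it appears in the trace; the parameter expansion below is just one way to split the NQN and is not necessarily how common.sh itself derives the ID:

    # 'nvme gen-hostnqn' prints nqn.2014-08.org.nvmexpress:uuid:<uuid>;
    # the trailing uuid doubles as the host ID used by later connect calls.
    NVME_HOSTNQN=$(nvme gen-hostnqn)
    NVME_HOSTID=${NVME_HOSTNQN##*:}
    echo "hostnqn=$NVME_HOSTNQN hostid=$NVME_HOSTID"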
00:12:51.498 10:20:28 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:51.498 10:20:28 nvmf_rdma.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:51.498 10:20:28 nvmf_rdma.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:51.498 10:20:28 nvmf_rdma.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:51.498 10:20:28 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.498 10:20:28 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.498 10:20:28 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.498 10:20:28 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:12:51.498 10:20:28 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.498 10:20:28 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:12:51.498 10:20:28 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:51.498 10:20:28 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:51.498 10:20:28 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:51.498 10:20:28 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:51.498 10:20:28 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:12:51.498 10:20:28 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:51.498 10:20:28 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:51.498 10:20:28 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:51.498 10:20:28 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:51.498 10:20:28 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:12:51.498 10:20:28 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:12:51.498 10:20:28 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:12:51.498 10:20:28 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=35ac2e22-3b51-401b-8554-e23218477862 00:12:51.498 10:20:28 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:12:51.498 10:20:28 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=334cb182-d662-4651-84eb-fb1ff927a9ac 00:12:51.498 10:20:28 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:12:51.498 10:20:28 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:12:51.498 10:20:28 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:12:51.498 10:20:28 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:12:51.498 10:20:28 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=0808266b-b990-4a73-bdc2-63699177d5ab 00:12:51.498 10:20:28 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:12:51.498 10:20:28 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:12:51.498 10:20:28 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:51.498 10:20:28 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:51.498 10:20:28 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:51.498 10:20:28 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:51.498 10:20:28 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:51.498 10:20:28 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:51.498 10:20:28 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:51.498 10:20:28 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:51.498 10:20:28 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:51.498 10:20:28 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:12:51.498 10:20:28 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:12:59.658 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- 
nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:12:59.658 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:12:59.658 Found net devices under 0000:98:00.0: mlx_0_0 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:12:59.658 Found net devices under 0000:98:00.1: mlx_0_1 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@420 -- # rdma_device_init 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@58 -- # uname 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@58 -- 
# '[' Linux '!=' Linux ']' 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@502 -- # allocate_nic_ips 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- 
nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:59.658 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:59.658 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:12:59.658 altname enp152s0f0np0 00:12:59.658 altname ens817f0np0 00:12:59.658 inet 192.168.100.8/24 scope global mlx_0_0 00:12:59.658 valid_lft forever preferred_lft forever 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:59.658 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:59.658 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:59.658 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:12:59.658 altname enp152s0f1np1 00:12:59.658 altname ens817f1np1 00:12:59.658 inet 192.168.100.9/24 scope global mlx_0_1 00:12:59.658 valid_lft forever preferred_lft forever 00:12:59.659 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:12:59.659 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:59.659 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:59.659 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:12:59.659 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:12:59.659 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:59.659 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:59.659 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:59.659 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:59.659 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:59.659 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:59.659 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:59.659 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:59.659 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:59.659 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:59.659 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:12:59.659 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:59.659 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:59.659 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:59.659 10:20:36 nvmf_rdma.nvmf_ns_masking -- 
nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:59.659 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:59.659 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:59.659 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:12:59.659 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:59.659 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:59.659 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:59.659 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:59.659 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:59.659 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:59.659 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:59.659 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:59.659 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:59.659 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:59.659 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:59.659 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:59.659 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:12:59.659 192.168.100.9' 00:12:59.659 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@457 -- # head -n 1 00:12:59.659 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:12:59.659 192.168.100.9' 00:12:59.659 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:59.659 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:12:59.659 192.168.100.9' 00:12:59.659 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@458 -- # tail -n +2 00:12:59.659 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@458 -- # head -n 1 00:12:59.659 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:59.659 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:12:59.659 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:59.659 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:12:59.659 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:12:59.659 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:12:59.659 10:20:36 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:12:59.659 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:59.659 10:20:36 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:59.659 10:20:36 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:59.659 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=2845565 00:12:59.659 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 2845565 00:12:59.659 10:20:36 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@480 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:12:59.659 10:20:36 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 2845565 ']' 00:12:59.659 10:20:36 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:59.659 10:20:36 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:59.659 10:20:36 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:59.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:59.659 10:20:36 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:59.659 10:20:36 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:59.659 [2024-07-15 10:20:36.574805] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:12:59.659 [2024-07-15 10:20:36.574875] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:59.659 EAL: No free 2048 kB hugepages reported on node 1 00:12:59.659 [2024-07-15 10:20:36.645745] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:59.659 [2024-07-15 10:20:36.721064] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:59.659 [2024-07-15 10:20:36.721104] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:59.659 [2024-07-15 10:20:36.721112] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:59.659 [2024-07-15 10:20:36.721118] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:59.659 [2024-07-15 10:20:36.721124] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:59.659 [2024-07-15 10:20:36.721144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.232 10:20:37 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:00.232 10:20:37 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:13:00.232 10:20:37 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:00.232 10:20:37 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:00.232 10:20:37 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:00.232 10:20:37 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:00.232 10:20:37 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:00.493 [2024-07-15 10:20:37.550349] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x123cf90/0x1241480) succeed. 00:13:00.493 [2024-07-15 10:20:37.564382] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x123e490/0x1282b10) succeed. 
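[annotation] The discovery and startup traced above boils down to: find the mlx5 netdevs, take their IPv4 addresses as the RDMA target addresses, start nvmf_tgt, and create an RDMA transport over RPC. A condensed sketch of that sequence, assuming an SPDK checkout at $SPDK and an RDMA-capable interface named mlx_0_0 carrying the target address; the RPC polling loop is a stand-in for the harness's waitforlisten helper, not a copy of it:

    SPDK=/path/to/spdk                               # placeholder for the SPDK tree
    TARGET_IP=$(ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1)

    "$SPDK"/build/bin/nvmf_tgt -i 0 -e 0xFFFF &      # same flags as in the trace
    tgt_pid=$!
    # poll the RPC socket until the target answers (waitforlisten equivalent)
    until "$SPDK"/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

    "$SPDK"/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    echo "nvmf_tgt (pid $tgt_pid) ready; RDMA target address: $TARGET_IP"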
00:13:00.493 10:20:37 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:13:00.493 10:20:37 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:13:00.493 10:20:37 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:00.754 Malloc1 00:13:00.754 10:20:37 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:01.016 Malloc2 00:13:01.017 10:20:37 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:01.017 10:20:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:13:01.278 10:20:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:01.278 [2024-07-15 10:20:38.443724] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:01.278 10:20:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:13:01.278 10:20:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0808266b-b990-4a73-bdc2-63699177d5ab -a 192.168.100.8 -s 4420 -i 4 00:13:01.849 10:20:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:13:01.849 10:20:38 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:13:01.849 10:20:38 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:01.849 10:20:38 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:01.849 10:20:38 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:13:03.765 10:20:40 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:03.765 10:20:40 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:03.765 10:20:40 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:03.765 10:20:40 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:03.765 10:20:40 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:03.765 10:20:40 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:13:03.765 10:20:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:03.765 10:20:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:03.765 10:20:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:03.765 10:20:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:03.765 10:20:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:13:03.765 10:20:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme 
list-ns /dev/nvme0 00:13:03.765 10:20:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:03.765 [ 0]:0x1 00:13:03.765 10:20:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:03.765 10:20:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:04.026 10:20:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6c04814cc18048b78e5cd0515e14765c 00:13:04.026 10:20:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6c04814cc18048b78e5cd0515e14765c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:04.026 10:20:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:13:04.026 10:20:41 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:13:04.026 10:20:41 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:04.026 10:20:41 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:04.026 [ 0]:0x1 00:13:04.026 10:20:41 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:04.026 10:20:41 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:04.026 10:20:41 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6c04814cc18048b78e5cd0515e14765c 00:13:04.026 10:20:41 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6c04814cc18048b78e5cd0515e14765c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:04.026 10:20:41 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:13:04.026 10:20:41 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:04.026 10:20:41 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:04.026 [ 1]:0x2 00:13:04.290 10:20:41 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:04.290 10:20:41 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:04.290 10:20:41 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=28442e4e8aa842b3b217f6d90979d3c2 00:13:04.290 10:20:41 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 28442e4e8aa842b3b217f6d90979d3c2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:04.290 10:20:41 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:13:04.290 10:20:41 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:04.573 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:04.573 10:20:41 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:04.834 10:20:41 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:13:05.095 10:20:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:13:05.095 10:20:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0808266b-b990-4a73-bdc2-63699177d5ab -a 192.168.100.8 -s 4420 -i 4 00:13:05.356 10:20:42 
nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:13:05.356 10:20:42 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:13:05.356 10:20:42 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:05.356 10:20:42 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:13:05.356 10:20:42 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:13:05.356 10:20:42 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:13:07.903 10:20:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:07.903 10:20:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:07.903 10:20:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:07.903 10:20:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:07.903 10:20:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:07.903 10:20:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:13:07.903 10:20:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:07.903 10:20:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:07.903 10:20:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:07.903 10:20:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:07.903 10:20:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:13:07.903 10:20:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:13:07.903 10:20:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:13:07.903 10:20:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:13:07.903 10:20:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:07.903 10:20:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:13:07.903 10:20:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:07.903 10:20:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:13:07.903 10:20:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:07.903 10:20:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:07.903 10:20:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:07.903 10:20:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:07.903 10:20:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:07.903 10:20:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:07.903 10:20:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:13:07.903 10:20:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:07.903 10:20:44 
nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:07.903 10:20:44 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:07.903 10:20:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:13:07.903 10:20:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:07.903 10:20:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:07.903 [ 0]:0x2 00:13:07.903 10:20:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:07.903 10:20:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:07.903 10:20:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=28442e4e8aa842b3b217f6d90979d3c2 00:13:07.903 10:20:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 28442e4e8aa842b3b217f6d90979d3c2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:07.903 10:20:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:07.903 10:20:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:13:07.903 10:20:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:07.903 10:20:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:07.903 [ 0]:0x1 00:13:07.903 10:20:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:07.903 10:20:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:07.903 10:20:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6c04814cc18048b78e5cd0515e14765c 00:13:07.903 10:20:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6c04814cc18048b78e5cd0515e14765c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:07.903 10:20:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:13:07.903 10:20:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:07.903 10:20:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:07.903 [ 1]:0x2 00:13:07.903 10:20:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:07.903 10:20:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:07.903 10:20:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=28442e4e8aa842b3b217f6d90979d3c2 00:13:07.903 10:20:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 28442e4e8aa842b3b217f6d90979d3c2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:07.903 10:20:44 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:07.903 10:20:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:13:07.903 10:20:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:13:07.903 10:20:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:13:07.903 10:20:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:13:07.903 10:20:45 
nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:07.903 10:20:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:13:07.903 10:20:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:07.903 10:20:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:13:07.903 10:20:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:08.165 10:20:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:08.165 10:20:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:08.165 10:20:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:08.165 10:20:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:08.165 10:20:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:08.165 10:20:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:13:08.165 10:20:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:08.165 10:20:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:08.166 10:20:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:08.166 10:20:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:13:08.166 10:20:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:08.166 10:20:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:08.166 [ 0]:0x2 00:13:08.166 10:20:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:08.166 10:20:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:08.166 10:20:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=28442e4e8aa842b3b217f6d90979d3c2 00:13:08.166 10:20:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 28442e4e8aa842b3b217f6d90979d3c2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:08.166 10:20:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:13:08.166 10:20:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:08.426 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:08.426 10:20:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:08.686 10:20:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:13:08.686 10:20:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0808266b-b990-4a73-bdc2-63699177d5ab -a 192.168.100.8 -s 4420 -i 4 00:13:09.257 10:20:46 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:09.257 10:20:46 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:13:09.257 10:20:46 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:09.257 10:20:46 
nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:13:09.257 10:20:46 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:13:09.257 10:20:46 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:13:11.168 10:20:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:11.168 10:20:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:11.168 10:20:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:11.168 10:20:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:13:11.168 10:20:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:11.168 10:20:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:13:11.168 10:20:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:11.168 10:20:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:11.168 10:20:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:11.168 10:20:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:11.168 10:20:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:13:11.168 10:20:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:11.168 10:20:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:11.168 [ 0]:0x1 00:13:11.168 10:20:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:11.168 10:20:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:11.168 10:20:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6c04814cc18048b78e5cd0515e14765c 00:13:11.169 10:20:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6c04814cc18048b78e5cd0515e14765c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:11.169 10:20:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:13:11.169 10:20:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:11.169 10:20:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:11.169 [ 1]:0x2 00:13:11.169 10:20:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:11.169 10:20:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:11.169 10:20:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=28442e4e8aa842b3b217f6d90979d3c2 00:13:11.169 10:20:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 28442e4e8aa842b3b217f6d90979d3c2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:11.169 10:20:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:11.429 10:20:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:13:11.429 10:20:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:13:11.429 10:20:48 nvmf_rdma.nvmf_ns_masking -- 
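Aside: the connect step above is followed by waitforserial, which polls lsblk until the expected number of namespaces carrying the subsystem serial shows up. Roughly (loop bound and sleep taken from the trace, the rest simplified):

waitforserial() {
    local serial=$1 want=${2:-1} i=0 have=0
    while (( i++ <= 15 )); do
        sleep 2
        have=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( have == want )) && return 0   # e.g. 2 namespaces after "connect 2"
    done
    return 1
}

waitforserial SPDKISFASTANDAWESOME 2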
common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:13:11.429 10:20:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:13:11.429 10:20:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:11.429 10:20:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:13:11.429 10:20:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:11.429 10:20:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:13:11.429 10:20:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:11.429 10:20:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:11.429 10:20:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:11.429 10:20:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:11.429 10:20:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:11.429 10:20:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:11.429 10:20:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:13:11.429 10:20:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:11.429 10:20:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:11.429 10:20:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:11.429 10:20:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:13:11.429 10:20:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:11.429 10:20:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:11.429 [ 0]:0x2 00:13:11.429 10:20:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:11.429 10:20:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:11.689 10:20:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=28442e4e8aa842b3b217f6d90979d3c2 00:13:11.689 10:20:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 28442e4e8aa842b3b217f6d90979d3c2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:11.689 10:20:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:11.690 10:20:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:13:11.690 10:20:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:11.690 10:20:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:11.690 10:20:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:11.690 10:20:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:11.690 10:20:48 
nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:11.690 10:20:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:11.690 10:20:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:11.690 10:20:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:11.690 10:20:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:13:11.690 10:20:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:11.690 [2024-07-15 10:20:48.796570] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:13:11.690 request: 00:13:11.690 { 00:13:11.690 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:11.690 "nsid": 2, 00:13:11.690 "host": "nqn.2016-06.io.spdk:host1", 00:13:11.690 "method": "nvmf_ns_remove_host", 00:13:11.690 "req_id": 1 00:13:11.690 } 00:13:11.690 Got JSON-RPC error response 00:13:11.690 response: 00:13:11.690 { 00:13:11.690 "code": -32602, 00:13:11.690 "message": "Invalid parameters" 00:13:11.690 } 00:13:11.690 10:20:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:13:11.690 10:20:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:11.690 10:20:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:11.690 10:20:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:11.690 10:20:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:13:11.690 10:20:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:13:11.690 10:20:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:13:11.690 10:20:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:13:11.690 10:20:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:11.690 10:20:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:13:11.690 10:20:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:11.690 10:20:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:13:11.690 10:20:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:11.690 10:20:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:11.690 10:20:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:11.690 10:20:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:11.690 10:20:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:11.690 10:20:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:11.690 10:20:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:13:11.690 10:20:48 
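Aside: the -32602 "Invalid parameters" response above is the expected outcome here: host1 was never added to namespace 2, so nvmf_ns_remove_host has nothing to remove and the script asserts the failure with its NOT wrapper. The same assertion expressed with plain shell negation (rpc.py path as used throughout this run):

RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
if ! "$RPC" nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1; then
    echo "removal correctly rejected"
fi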
nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:11.690 10:20:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:11.690 10:20:48 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:11.690 10:20:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:13:11.690 10:20:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:11.690 10:20:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:11.690 [ 0]:0x2 00:13:11.690 10:20:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:11.690 10:20:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:11.950 10:20:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=28442e4e8aa842b3b217f6d90979d3c2 00:13:11.950 10:20:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 28442e4e8aa842b3b217f6d90979d3c2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:11.950 10:20:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:13:11.950 10:20:48 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:12.210 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.210 10:20:49 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2848240 00:13:12.210 10:20:49 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:13:12.210 10:20:49 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:13:12.210 10:20:49 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2848240 /var/tmp/host.sock 00:13:12.210 10:20:49 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 2848240 ']' 00:13:12.210 10:20:49 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:13:12.210 10:20:49 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:12.210 10:20:49 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:12.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:13:12.210 10:20:49 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:12.210 10:20:49 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:12.210 [2024-07-15 10:20:49.391095] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
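Aside: here the test starts a second SPDK application (the "host" side) on its own RPC socket, /var/tmp/host.sock, and later drives it through a hostrpc wrapper. A minimal sketch of that pattern; the readiness loop below only waits for the UNIX socket to appear, which is a simplification of the script's waitforlisten helper:

SPDK_BIN=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt
RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
HOST_SOCK=/var/tmp/host.sock

"$SPDK_BIN" -r "$HOST_SOCK" -m 2 &      # -r: RPC listen socket, -m: core mask
hostpid=$!
trap 'kill "$hostpid"' EXIT

for _ in $(seq 1 100); do               # wait for the RPC socket to exist
    [[ -S $HOST_SOCK ]] && break
    sleep 0.1
done

hostrpc() { "$RPC" -s "$HOST_SOCK" "$@"; }   # all host-side RPCs go through this socket
hostrpc bdev_get_bdevs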
00:13:12.210 [2024-07-15 10:20:49.391147] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2848240 ] 00:13:12.470 EAL: No free 2048 kB hugepages reported on node 1 00:13:12.470 [2024-07-15 10:20:49.451014] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:12.470 [2024-07-15 10:20:49.505272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:13.043 10:20:50 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:13.043 10:20:50 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:13:13.043 10:20:50 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:13.303 10:20:50 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:13.303 10:20:50 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 35ac2e22-3b51-401b-8554-e23218477862 00:13:13.303 10:20:50 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:13:13.303 10:20:50 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 35AC2E223B51401B8554E23218477862 -i 00:13:13.564 10:20:50 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 334cb182-d662-4651-84eb-fb1ff927a9ac 00:13:13.564 10:20:50 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:13:13.564 10:20:50 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 334CB182D662465184EBFB1FF927A9AC -i 00:13:13.824 10:20:50 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:13.824 10:20:50 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:13:14.085 10:20:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:14.085 10:20:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:14.360 nvme0n1 00:13:14.360 10:20:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:14.360 10:20:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q 
nqn.2016-06.io.spdk:host2 -b nvme1 00:13:14.360 nvme1n2 00:13:14.621 10:20:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:13:14.621 10:20:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:13:14.621 10:20:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:14.621 10:20:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:13:14.621 10:20:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:13:14.621 10:20:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:13:14.621 10:20:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:13:14.621 10:20:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:13:14.621 10:20:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:13:14.882 10:20:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 35ac2e22-3b51-401b-8554-e23218477862 == \3\5\a\c\2\e\2\2\-\3\b\5\1\-\4\0\1\b\-\8\5\5\4\-\e\2\3\2\1\8\4\7\7\8\6\2 ]] 00:13:14.882 10:20:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:13:14.882 10:20:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:13:14.882 10:20:51 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:13:14.882 10:20:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 334cb182-d662-4651-84eb-fb1ff927a9ac == \3\3\4\c\b\1\8\2\-\d\6\6\2\-\4\6\5\1\-\8\4\e\b\-\f\b\1\f\f\9\2\7\a\9\a\c ]] 00:13:14.883 10:20:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 2848240 00:13:14.883 10:20:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 2848240 ']' 00:13:14.883 10:20:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 2848240 00:13:14.883 10:20:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:13:14.883 10:20:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:14.883 10:20:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2848240 00:13:14.883 10:20:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:14.883 10:20:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:14.883 10:20:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2848240' 00:13:14.883 killing process with pid 2848240 00:13:14.883 10:20:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 2848240 00:13:14.883 10:20:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 2848240 00:13:15.144 10:20:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:15.404 10:20:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:13:15.404 10:20:52 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@142 -- # 
nvmftestfini 00:13:15.404 10:20:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:15.404 10:20:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:13:15.404 10:20:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:15.404 10:20:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:15.404 10:20:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:13:15.404 10:20:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:15.404 10:20:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:15.404 rmmod nvme_rdma 00:13:15.404 rmmod nvme_fabrics 00:13:15.404 10:20:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:15.404 10:20:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:13:15.404 10:20:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:13:15.404 10:20:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 2845565 ']' 00:13:15.404 10:20:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 2845565 00:13:15.405 10:20:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 2845565 ']' 00:13:15.405 10:20:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 2845565 00:13:15.405 10:20:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:13:15.405 10:20:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:15.405 10:20:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2845565 00:13:15.405 10:20:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:15.405 10:20:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:15.405 10:20:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2845565' 00:13:15.405 killing process with pid 2845565 00:13:15.405 10:20:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 2845565 00:13:15.405 10:20:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 2845565 00:13:15.665 10:20:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:15.665 10:20:52 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:13:15.665 00:13:15.665 real 0m24.341s 00:13:15.665 user 0m25.813s 00:13:15.665 sys 0m7.597s 00:13:15.665 10:20:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:15.665 10:20:52 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:15.665 ************************************ 00:13:15.665 END TEST nvmf_ns_masking 00:13:15.665 ************************************ 00:13:15.665 10:20:52 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:13:15.665 10:20:52 nvmf_rdma -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:13:15.665 10:20:52 nvmf_rdma -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:13:15.665 10:20:52 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:15.665 10:20:52 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:15.665 10:20:52 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:13:15.665 ************************************ 00:13:15.665 START TEST nvmf_nvme_cli 
00:13:15.665 ************************************ 00:13:15.665 10:20:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:13:15.925 * Looking for test storage... 00:13:15.925 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:15.925 10:20:52 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:15.925 10:20:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:13:15.925 10:20:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:15.925 10:20:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:15.925 10:20:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:15.925 10:20:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:15.925 10:20:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:15.925 10:20:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:15.925 10:20:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:15.925 10:20:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:15.925 10:20:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:15.925 10:20:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:15.925 10:20:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:15.925 10:20:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:15.925 10:20:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:15.925 10:20:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:15.925 10:20:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:15.925 10:20:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:15.925 10:20:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:15.925 10:20:52 nvmf_rdma.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:15.925 10:20:52 nvmf_rdma.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:15.925 10:20:52 nvmf_rdma.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:15.925 10:20:52 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.925 10:20:52 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.925 10:20:52 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.925 10:20:52 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:13:15.925 10:20:52 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.925 10:20:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:13:15.925 10:20:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:15.925 10:20:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:15.925 10:20:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:15.925 10:20:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:15.925 10:20:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:15.925 10:20:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:15.925 10:20:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:15.925 10:20:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:15.925 10:20:52 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:15.925 10:20:52 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:15.925 10:20:52 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:13:15.926 10:20:52 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:13:15.926 10:20:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:13:15.926 10:20:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:15.926 10:20:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:15.926 10:20:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:15.926 10:20:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:15.926 10:20:52 nvmf_rdma.nvmf_nvme_cli -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:15.926 10:20:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:15.926 10:20:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:15.926 10:20:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:15.926 10:20:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:15.926 10:20:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:13:15.926 10:20:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:24.058 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:24.058 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:13:24.058 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:24.058 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:24.058 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:24.058 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:24.058 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:24.058 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@322 -- # 
pci_devs+=("${x722[@]}") 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:13:24.059 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:13:24.059 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:13:24.059 Found net devices under 0000:98:00.0: mlx_0_0 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:24.059 10:21:00 
nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:13:24.059 Found net devices under 0000:98:00.1: mlx_0_1 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@420 -- # rdma_device_init 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@58 -- # uname 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@502 -- # allocate_nic_ips 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
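Aside: the "Found net devices under 0000:98:00.x" lines come from mapping each RDMA-capable PCI function to its kernel netdev through sysfs. A stripped-down version of that enumeration (the two PCI addresses are the ones reported by this testbed):

net_devs=()
for pci in 0000:98:00.0 0000:98:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    [[ -e ${pci_net_devs[0]} ]] || continue        # no netdev bound to this function
    pci_net_devs=("${pci_net_devs[@]##*/}")        # keep only the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done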
00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:24.059 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:24.059 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:13:24.059 altname enp152s0f0np0 00:13:24.059 altname ens817f0np0 00:13:24.059 inet 192.168.100.8/24 scope global mlx_0_0 00:13:24.059 valid_lft forever preferred_lft forever 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:13:24.059 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:24.059 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:13:24.059 altname enp152s0f1np1 00:13:24.059 altname ens817f1np1 00:13:24.059 inet 192.168.100.9/24 scope global mlx_0_1 00:13:24.059 valid_lft forever preferred_lft forever 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:24.059 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:24.060 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:24.060 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:13:24.060 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:24.060 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:24.060 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:24.060 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:24.060 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:24.060 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:24.060 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:13:24.060 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:24.060 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:24.060 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:24.060 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:24.060 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:24.060 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:24.060 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:24.060 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:24.060 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:24.060 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:24.060 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:24.060 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:24.060 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:13:24.060 192.168.100.9' 00:13:24.060 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:13:24.060 192.168.100.9' 00:13:24.060 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@457 -- # head -n 1 00:13:24.060 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:24.060 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:13:24.060 192.168.100.9' 00:13:24.060 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@458 -- # tail -n +2 00:13:24.060 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@458 -- # head -n 1 00:13:24.060 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:24.060 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:13:24.060 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma 
--num-shared-buffers 1024' 00:13:24.060 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:13:24.060 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:13:24.060 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:13:24.060 10:21:00 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:13:24.060 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:24.060 10:21:00 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:24.060 10:21:00 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:24.060 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=2853282 00:13:24.060 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 2853282 00:13:24.060 10:21:00 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:24.060 10:21:00 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 2853282 ']' 00:13:24.060 10:21:00 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:24.060 10:21:00 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:24.060 10:21:00 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:24.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:24.060 10:21:00 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:24.060 10:21:00 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:24.060 [2024-07-15 10:21:01.018654] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:13:24.060 [2024-07-15 10:21:01.018721] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:24.060 EAL: No free 2048 kB hugepages reported on node 1 00:13:24.060 [2024-07-15 10:21:01.094289] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:24.060 [2024-07-15 10:21:01.172042] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:24.060 [2024-07-15 10:21:01.172083] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:24.060 [2024-07-15 10:21:01.172094] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:24.060 [2024-07-15 10:21:01.172100] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:24.060 [2024-07-15 10:21:01.172106] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
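Aside: the target addresses used for the rest of the test are derived from the RDMA interfaces exactly as the trace shows: read the IPv4 address off each mlx netdev, then take the first and second entries of the resulting list. Reconstructed:

get_ip_address() {
    ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
}

RDMA_IP_LIST=$(for dev in mlx_0_0 mlx_0_1; do get_ip_address "$dev"; done)
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                  # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)    # 192.168.100.9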
00:13:24.060 [2024-07-15 10:21:01.172263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:24.060 [2024-07-15 10:21:01.172482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.060 [2024-07-15 10:21:01.172484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:24.060 [2024-07-15 10:21:01.172339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:24.631 10:21:01 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:24.631 10:21:01 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:13:24.631 10:21:01 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:24.631 10:21:01 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:24.631 10:21:01 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:24.891 10:21:01 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:24.891 10:21:01 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:24.891 10:21:01 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.891 10:21:01 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:24.891 [2024-07-15 10:21:01.882623] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1fd0200/0x1fd46f0) succeed. 00:13:24.891 [2024-07-15 10:21:01.897215] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1fd1840/0x2015d80) succeed. 00:13:24.891 10:21:02 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.891 10:21:02 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:24.891 10:21:02 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.891 10:21:02 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:24.891 Malloc0 00:13:24.891 10:21:02 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.891 10:21:02 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:24.891 10:21:02 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.891 10:21:02 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:24.891 Malloc1 00:13:24.891 10:21:02 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.891 10:21:02 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:13:24.891 10:21:02 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.891 10:21:02 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:24.891 10:21:02 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.891 10:21:02 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:24.891 10:21:02 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.891 10:21:02 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:24.891 10:21:02 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:24.891 10:21:02 
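Aside: once the nvmf target is up, nvme_cli.sh builds its test subsystem entirely over JSON-RPC; the calls below are the ones visible in the trace (rpc_cmd in the script is rpc.py against the default /var/tmp/spdk.sock, shortened here to a variable):

RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

"$RPC" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
"$RPC" bdev_malloc_create 64 512 -b Malloc0    # 64 MB bdev, 512-byte blocks
"$RPC" bdev_malloc_create 64 512 -b Malloc1
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
"$RPC" nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420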
nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:24.891 10:21:02 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:24.891 10:21:02 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:25.151 10:21:02 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.151 10:21:02 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:25.151 10:21:02 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.151 10:21:02 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:25.152 [2024-07-15 10:21:02.103660] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:25.152 10:21:02 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.152 10:21:02 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:13:25.152 10:21:02 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.152 10:21:02 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:25.152 10:21:02 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.152 10:21:02 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -a 192.168.100.8 -s 4420 00:13:25.152 00:13:25.152 Discovery Log Number of Records 2, Generation counter 2 00:13:25.152 =====Discovery Log Entry 0====== 00:13:25.152 trtype: rdma 00:13:25.152 adrfam: ipv4 00:13:25.152 subtype: current discovery subsystem 00:13:25.152 treq: not required 00:13:25.152 portid: 0 00:13:25.152 trsvcid: 4420 00:13:25.152 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:25.152 traddr: 192.168.100.8 00:13:25.152 eflags: explicit discovery connections, duplicate discovery information 00:13:25.152 rdma_prtype: not specified 00:13:25.152 rdma_qptype: connected 00:13:25.152 rdma_cms: rdma-cm 00:13:25.152 rdma_pkey: 0x0000 00:13:25.152 =====Discovery Log Entry 1====== 00:13:25.152 trtype: rdma 00:13:25.152 adrfam: ipv4 00:13:25.152 subtype: nvme subsystem 00:13:25.152 treq: not required 00:13:25.152 portid: 0 00:13:25.152 trsvcid: 4420 00:13:25.152 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:25.152 traddr: 192.168.100.8 00:13:25.152 eflags: none 00:13:25.152 rdma_prtype: not specified 00:13:25.152 rdma_qptype: connected 00:13:25.152 rdma_cms: rdma-cm 00:13:25.152 rdma_pkey: 0x0000 00:13:25.152 10:21:02 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:13:25.152 10:21:02 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:13:25.152 10:21:02 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:13:25.152 10:21:02 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:25.152 10:21:02 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:13:25.152 10:21:02 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:13:25.152 10:21:02 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:25.152 10:21:02 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:13:25.152 10:21:02 
nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:25.152 10:21:02 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:13:25.152 10:21:02 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:13:26.535 10:21:03 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:26.535 10:21:03 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:13:26.535 10:21:03 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:26.535 10:21:03 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:13:26.535 10:21:03 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:13:26.535 10:21:03 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:13:29.081 10:21:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:29.081 10:21:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:29.082 10:21:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:29.082 10:21:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:13:29.082 10:21:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:29.082 10:21:05 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:13:29.082 10:21:05 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:13:29.082 10:21:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:13:29.082 10:21:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:29.082 10:21:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:13:29.082 10:21:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:13:29.082 10:21:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:29.082 10:21:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:13:29.082 10:21:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:29.082 10:21:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:29.082 10:21:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:13:29.082 10:21:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:29.082 10:21:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:29.082 10:21:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:13:29.082 10:21:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:29.082 10:21:05 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:13:29.082 /dev/nvme0n1 ]] 00:13:29.082 10:21:05 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:13:29.082 10:21:05 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:13:29.082 10:21:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:13:29.082 10:21:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:29.082 10:21:05 
nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:13:29.082 10:21:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:13:29.082 10:21:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:29.082 10:21:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:13:29.082 10:21:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:29.082 10:21:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:29.082 10:21:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:13:29.082 10:21:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:29.082 10:21:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:29.082 10:21:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:13:29.082 10:21:05 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:29.082 10:21:05 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:13:29.082 10:21:05 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:30.027 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:30.027 10:21:07 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:30.027 10:21:07 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:13:30.027 10:21:07 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:30.027 10:21:07 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:30.027 10:21:07 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:30.027 10:21:07 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:30.027 10:21:07 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:13:30.027 10:21:07 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:13:30.027 10:21:07 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:30.027 10:21:07 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:30.027 10:21:07 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:30.027 10:21:07 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:30.027 10:21:07 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:30.027 10:21:07 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:13:30.027 10:21:07 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:30.027 10:21:07 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:13:30.027 10:21:07 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:30.027 10:21:07 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:30.027 10:21:07 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:13:30.027 10:21:07 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:30.027 10:21:07 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:30.027 rmmod nvme_rdma 00:13:30.027 rmmod nvme_fabrics 00:13:30.027 10:21:07 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v 
-r nvme-fabrics 00:13:30.027 10:21:07 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:13:30.027 10:21:07 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:13:30.027 10:21:07 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 2853282 ']' 00:13:30.027 10:21:07 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 2853282 00:13:30.027 10:21:07 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 2853282 ']' 00:13:30.027 10:21:07 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 2853282 00:13:30.027 10:21:07 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:13:30.027 10:21:07 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:30.027 10:21:07 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2853282 00:13:30.027 10:21:07 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:30.027 10:21:07 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:30.027 10:21:07 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2853282' 00:13:30.027 killing process with pid 2853282 00:13:30.027 10:21:07 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 2853282 00:13:30.027 10:21:07 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 2853282 00:13:30.288 10:21:07 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:30.288 10:21:07 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:13:30.288 00:13:30.288 real 0m14.584s 00:13:30.288 user 0m27.197s 00:13:30.288 sys 0m6.458s 00:13:30.288 10:21:07 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:30.288 10:21:07 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:30.288 ************************************ 00:13:30.288 END TEST nvmf_nvme_cli 00:13:30.288 ************************************ 00:13:30.288 10:21:07 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:13:30.288 10:21:07 nvmf_rdma -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:13:30.288 10:21:07 nvmf_rdma -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:13:30.288 10:21:07 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:30.288 10:21:07 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:30.288 10:21:07 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:13:30.550 ************************************ 00:13:30.550 START TEST nvmf_host_management 00:13:30.550 ************************************ 00:13:30.550 10:21:07 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:13:30.550 * Looking for test storage... 
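The trace above closes out the nvmf_nvme_cli test: the namespace and the two listeners are added to nqn.2016-06.io.spdk:cnode1, the kernel initiator discovers, connects to and disconnects from the subsystem, the subsystem is deleted, nvme-rdma/nvme-fabrics are unloaded, and the target process (pid 2853282) is killed before nvmf_host_management begins. A minimal sketch of that sequence, reconstructed only from the xtrace lines above (rpc_cmd is the harness wrapper around the SPDK RPC client; the --hostnqn/--hostid flags are dropped here for readability):

  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
  nvme discover -t rdma -a 192.168.100.8 -s 4420            # prints the two discovery log entries shown above
  nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1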
00:13:30.550 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:30.550 10:21:07 nvmf_rdma.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:30.550 10:21:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:13:30.550 10:21:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:30.550 10:21:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:30.550 10:21:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:30.550 10:21:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:30.550 10:21:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:30.550 10:21:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:30.550 10:21:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:30.550 10:21:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:30.550 10:21:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:30.550 10:21:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:30.550 10:21:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:30.550 10:21:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:30.550 10:21:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:30.550 10:21:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:30.550 10:21:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:30.550 10:21:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:30.550 10:21:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:30.550 10:21:07 nvmf_rdma.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:30.550 10:21:07 nvmf_rdma.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:30.550 10:21:07 nvmf_rdma.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:30.550 10:21:07 nvmf_rdma.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.550 10:21:07 nvmf_rdma.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.550 10:21:07 nvmf_rdma.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.550 10:21:07 nvmf_rdma.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:13:30.550 10:21:07 nvmf_rdma.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.550 10:21:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:13:30.550 10:21:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:30.550 10:21:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:30.550 10:21:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:30.550 10:21:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:30.550 10:21:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:30.550 10:21:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:30.550 10:21:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:30.550 10:21:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:30.550 10:21:07 nvmf_rdma.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:30.550 10:21:07 nvmf_rdma.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:30.550 10:21:07 nvmf_rdma.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:13:30.550 10:21:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:13:30.550 10:21:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:30.550 10:21:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:30.551 10:21:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:30.551 10:21:07 nvmf_rdma.nvmf_host_management -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:13:30.551 10:21:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:30.551 10:21:07 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:30.551 10:21:07 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:30.551 10:21:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:30.551 10:21:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:30.551 10:21:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:13:30.551 10:21:07 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:38.701 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:38.701 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:13:38.701 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:38.701 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:38.701 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:38.701 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:38.701 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:38.701 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:13:38.701 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:38.701 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:13:38.701 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:13:38.701 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:13:38.701 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:13:38.701 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:13:38.701 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:13:38.701 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:38.701 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:38.701 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:38.701 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:38.701 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:38.701 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:38.701 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:38.701 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:38.701 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:38.701 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:38.701 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:38.701 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:38.701 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:38.701 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:38.701 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:38.701 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:38.701 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:38.701 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:38.701 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:38.701 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:13:38.701 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:13:38.701 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:38.701 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:38.701 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:38.701 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:38.701 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:38.701 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:38.701 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:38.701 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:13:38.701 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:13:38.701 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:38.701 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:38.701 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:38.701 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:38.701 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:38.701 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:38.701 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:38.701 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:38.701 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:38.701 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:38.701 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:38.701 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:38.701 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:13:38.702 Found net devices under 0000:98:00.0: mlx_0_0 00:13:38.702 
10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:13:38.702 Found net devices under 0000:98:00.1: mlx_0_1 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@420 -- # rdma_device_init 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@58 -- # uname 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@502 -- # allocate_nic_ips 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for 
rxe_net_dev in "${rxe_net_devs[@]}" 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:38.702 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:38.702 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:13:38.702 altname enp152s0f0np0 00:13:38.702 altname ens817f0np0 00:13:38.702 inet 192.168.100.8/24 scope global mlx_0_0 00:13:38.702 valid_lft forever preferred_lft forever 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:13:38.702 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:38.702 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:13:38.702 altname enp152s0f1np1 00:13:38.702 altname ens817f1np1 00:13:38.702 inet 192.168.100.9/24 scope global mlx_0_1 00:13:38.702 valid_lft forever preferred_lft forever 
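allocate_nic_ips above walks the two mlx5 netdevs returned by get_rdma_if_list and reads each one's IPv4 address with the ip/awk/cut pipeline from nvmf/common.sh. The same extraction as a stand-alone snippet, using the interface names and addresses from this run:

  # first IPv4 address of an RDMA interface, as nvmf/common.sh@113 does above
  get_ip_address() {
      local interface=$1
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  get_ip_address mlx_0_0    # 192.168.100.8 in this run
  get_ip_address mlx_0_1    # 192.168.100.9 in this run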
00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- 
nvmf/common.sh@113 -- # awk '{print $4}' 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:13:38.702 192.168.100.9' 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:13:38.702 192.168.100.9' 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@457 -- # head -n 1 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:13:38.702 192.168.100.9' 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@458 -- # tail -n +2 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@458 -- # head -n 1 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:38.702 10:21:15 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:38.703 10:21:15 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:38.703 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=2859235 00:13:38.703 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 2859235 00:13:38.703 10:21:15 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 2859235 ']' 00:13:38.703 10:21:15 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:38.703 10:21:15 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:38.703 10:21:15 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:38.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:38.703 10:21:15 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:38.703 10:21:15 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:38.703 10:21:15 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:13:38.963 [2024-07-15 10:21:15.930290] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
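nvmftestinit then folds the two addresses into RDMA_IP_LIST, splits it with the head/tail pipeline shown above, sets the RDMA transport options, and loads nvme-rdma on the initiator side. The split, written out with the values from this run:

  RDMA_IP_LIST='192.168.100.8
  192.168.100.9'
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9
  NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
  modprobe nvme-rdma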
00:13:38.963 [2024-07-15 10:21:15.930360] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:38.963 EAL: No free 2048 kB hugepages reported on node 1 00:13:38.963 [2024-07-15 10:21:16.020345] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:38.963 [2024-07-15 10:21:16.115910] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:38.963 [2024-07-15 10:21:16.115974] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:38.963 [2024-07-15 10:21:16.115983] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:38.963 [2024-07-15 10:21:16.115989] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:38.963 [2024-07-15 10:21:16.115995] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:38.963 [2024-07-15 10:21:16.116129] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:38.963 [2024-07-15 10:21:16.116295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:38.963 [2024-07-15 10:21:16.116468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:13:38.963 [2024-07-15 10:21:16.116469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:39.534 10:21:16 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:39.534 10:21:16 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:13:39.534 10:21:16 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:39.534 10:21:16 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:39.534 10:21:16 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:39.796 10:21:16 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:39.796 10:21:16 nvmf_rdma.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:39.796 10:21:16 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.796 10:21:16 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:39.796 [2024-07-15 10:21:16.779638] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x8076b0/0x80bba0) succeed. 00:13:39.796 [2024-07-15 10:21:16.793465] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x808cf0/0x84d230) succeed. 
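With nvmf_tgt up on core mask 0x1E (reactors on cores 1-4), host_management creates the RDMA transport and the target registers both mlx5 ports. rpc_cmd in this suite is a thin wrapper over the SPDK RPC client, so a rough stand-alone equivalent of the call above, assuming the default /var/tmp/spdk.sock RPC socket, would be:

  # rough equivalent of: rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192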
00:13:39.796 10:21:16 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.796 10:21:16 nvmf_rdma.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:13:39.796 10:21:16 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:39.796 10:21:16 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:39.796 10:21:16 nvmf_rdma.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:39.796 10:21:16 nvmf_rdma.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:13:39.796 10:21:16 nvmf_rdma.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:13:39.796 10:21:16 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.796 10:21:16 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:39.796 Malloc0 00:13:39.796 [2024-07-15 10:21:16.973306] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:39.796 10:21:16 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.796 10:21:16 nvmf_rdma.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:13:39.796 10:21:16 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:39.796 10:21:16 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:40.057 10:21:17 nvmf_rdma.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2859601 00:13:40.057 10:21:17 nvmf_rdma.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2859601 /var/tmp/bdevperf.sock 00:13:40.057 10:21:17 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 2859601 ']' 00:13:40.057 10:21:17 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:40.057 10:21:17 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:40.057 10:21:17 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:40.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
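host_management then creates the Malloc0 bdev and the cnode0 subsystem behind the 192.168.100.8:4420 listener, and launches bdevperf as the initiator-side workload, waiting on its RPC socket. waitforlisten is the harness helper that blocks until the freshly started app answers RPCs on /var/tmp/bdevperf.sock; an illustrative polling loop in the same spirit (not the actual helper, and the probe RPC used below is only an assumption):

  # illustrative stand-in for: waitforlisten <pid> <rpc-socket>
  wait_for_rpc_sock() {
      local pid=$1 sock=$2
      for ((i = 0; i < 100; i++)); do
          kill -0 "$pid" 2>/dev/null || return 1                          # app already exited
          ./scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && return 0
          sleep 0.1
      done
      return 1
  }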
00:13:40.057 10:21:17 nvmf_rdma.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:13:40.057 10:21:17 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:40.057 10:21:17 nvmf_rdma.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:13:40.057 10:21:17 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:40.057 10:21:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:13:40.057 10:21:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:13:40.057 10:21:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:40.057 10:21:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:40.057 { 00:13:40.057 "params": { 00:13:40.057 "name": "Nvme$subsystem", 00:13:40.057 "trtype": "$TEST_TRANSPORT", 00:13:40.057 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:40.057 "adrfam": "ipv4", 00:13:40.057 "trsvcid": "$NVMF_PORT", 00:13:40.057 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:40.057 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:40.057 "hdgst": ${hdgst:-false}, 00:13:40.057 "ddgst": ${ddgst:-false} 00:13:40.057 }, 00:13:40.057 "method": "bdev_nvme_attach_controller" 00:13:40.057 } 00:13:40.057 EOF 00:13:40.057 )") 00:13:40.057 10:21:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:13:40.057 10:21:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:13:40.057 10:21:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:13:40.057 10:21:17 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:40.057 "params": { 00:13:40.057 "name": "Nvme0", 00:13:40.057 "trtype": "rdma", 00:13:40.057 "traddr": "192.168.100.8", 00:13:40.057 "adrfam": "ipv4", 00:13:40.057 "trsvcid": "4420", 00:13:40.057 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:40.057 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:40.057 "hdgst": false, 00:13:40.057 "ddgst": false 00:13:40.057 }, 00:13:40.057 "method": "bdev_nvme_attach_controller" 00:13:40.057 }' 00:13:40.057 [2024-07-15 10:21:17.074661] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:13:40.057 [2024-07-15 10:21:17.074712] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2859601 ] 00:13:40.057 EAL: No free 2048 kB hugepages reported on node 1 00:13:40.057 [2024-07-15 10:21:17.141379] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:40.057 [2024-07-15 10:21:17.206206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:40.318 Running I/O for 10 seconds... 
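gen_nvmf_target_json 0 emits the bdev_nvme_attach_controller block printed above (Nvme0 over rdma to 192.168.100.8:4420, subsystem nqn.2016-06.io.spdk:cnode0), and bdevperf reads it through /dev/fd/63, i.e. bash process substitution. A sketch of that launch with the parameters from this run; the exact wrapper JSON that gen_nvmf_target_json places around the attach-controller entry is not shown in this trace:

  # sketch of the bdevperf launch above; /dev/fd/63 in the trace comes from the <(...) substitution
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf \
      -r /var/tmp/bdevperf.sock \
      --json <(gen_nvmf_target_json 0) \
      -q 64 -o 65536 -w verify -t 10
  # -q 64: queue depth, -o 65536: 64 KiB I/O size, -w verify: read-back verification, -t 10: run for 10 s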
00:13:40.890 10:21:17 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:40.890 10:21:17 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:13:40.890 10:21:17 nvmf_rdma.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:13:40.890 10:21:17 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.890 10:21:17 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:40.890 10:21:17 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.890 10:21:17 nvmf_rdma.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:40.890 10:21:17 nvmf_rdma.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:13:40.890 10:21:17 nvmf_rdma.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:13:40.890 10:21:17 nvmf_rdma.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:13:40.890 10:21:17 nvmf_rdma.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:13:40.890 10:21:17 nvmf_rdma.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:13:40.890 10:21:17 nvmf_rdma.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:13:40.890 10:21:17 nvmf_rdma.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:13:40.890 10:21:17 nvmf_rdma.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:13:40.890 10:21:17 nvmf_rdma.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:13:40.890 10:21:17 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.890 10:21:17 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:40.890 10:21:17 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.890 10:21:17 nvmf_rdma.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1263 00:13:40.890 10:21:17 nvmf_rdma.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1263 -ge 100 ']' 00:13:40.890 10:21:17 nvmf_rdma.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:13:40.890 10:21:17 nvmf_rdma.nvmf_host_management -- target/host_management.sh@60 -- # break 00:13:40.890 10:21:17 nvmf_rdma.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:13:40.890 10:21:17 nvmf_rdma.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:40.890 10:21:17 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.890 10:21:17 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:40.890 10:21:17 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.890 10:21:17 nvmf_rdma.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:40.890 10:21:17 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.890 10:21:17 nvmf_rdma.nvmf_host_management -- 
common/autotest_common.sh@10 -- # set +x 00:13:40.890 10:21:17 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.890 10:21:17 nvmf_rdma.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:13:41.834 [2024-07-15 10:21:18.957442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:46592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018edff00 len:0x10000 key:0x182600 00:13:41.834 [2024-07-15 10:21:18.957482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2cfe9000 sqhd:52b0 p:0 m:0 dnr:0 00:13:41.834 [2024-07-15 10:21:18.957501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:46720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018ecfe80 len:0x10000 key:0x182600 00:13:41.834 [2024-07-15 10:21:18.957510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2cfe9000 sqhd:52b0 p:0 m:0 dnr:0 00:13:41.834 [2024-07-15 10:21:18.957520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:46848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018ebfe00 len:0x10000 key:0x182600 00:13:41.834 [2024-07-15 10:21:18.957527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2cfe9000 sqhd:52b0 p:0 m:0 dnr:0 00:13:41.834 [2024-07-15 10:21:18.957537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:46976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018eafd80 len:0x10000 key:0x182600 00:13:41.834 [2024-07-15 10:21:18.957544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2cfe9000 sqhd:52b0 p:0 m:0 dnr:0 00:13:41.834 [2024-07-15 10:21:18.957554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:47104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e9fd00 len:0x10000 key:0x182600 00:13:41.834 [2024-07-15 10:21:18.957561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2cfe9000 sqhd:52b0 p:0 m:0 dnr:0 00:13:41.834 [2024-07-15 10:21:18.957570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:47232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e8fc80 len:0x10000 key:0x182600 00:13:41.834 [2024-07-15 10:21:18.957583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2cfe9000 sqhd:52b0 p:0 m:0 dnr:0 00:13:41.834 [2024-07-15 10:21:18.957593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:47360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e7fc00 len:0x10000 key:0x182600 00:13:41.834 [2024-07-15 10:21:18.957600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2cfe9000 sqhd:52b0 p:0 m:0 dnr:0 00:13:41.834 [2024-07-15 10:21:18.957610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:47488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e6fb80 len:0x10000 key:0x182600 00:13:41.834 [2024-07-15 10:21:18.957617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2cfe9000 sqhd:52b0 p:0 m:0 dnr:0 00:13:41.834 [2024-07-15 10:21:18.957626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:47616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e5fb00 
len:0x10000 key:0x182600 00:13:41.834 [2024-07-15 10:21:18.957633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2cfe9000 sqhd:52b0 p:0 m:0 dnr:0 00:13:41.834 [2024-07-15 10:21:18.957643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:47744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e4fa80 len:0x10000 key:0x182600 00:13:41.834 [2024-07-15 10:21:18.957650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2cfe9000 sqhd:52b0 p:0 m:0 dnr:0 00:13:41.834 [2024-07-15 10:21:18.957659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:47872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e3fa00 len:0x10000 key:0x182600 00:13:41.834 [2024-07-15 10:21:18.957667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2cfe9000 sqhd:52b0 p:0 m:0 dnr:0 00:13:41.834 [2024-07-15 10:21:18.957676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:48000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e2f980 len:0x10000 key:0x182600 00:13:41.834 [2024-07-15 10:21:18.957683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2cfe9000 sqhd:52b0 p:0 m:0 dnr:0 00:13:41.834 [2024-07-15 10:21:18.957692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:48128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e1f900 len:0x10000 key:0x182600 00:13:41.834 [2024-07-15 10:21:18.957700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2cfe9000 sqhd:52b0 p:0 m:0 dnr:0 00:13:41.834 [2024-07-15 10:21:18.957709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:48256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e0f880 len:0x10000 key:0x182600 00:13:41.834 [2024-07-15 10:21:18.957716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2cfe9000 sqhd:52b0 p:0 m:0 dnr:0 00:13:41.834 [2024-07-15 10:21:18.957726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:48384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a4b040 len:0x10000 key:0x182100 00:13:41.834 [2024-07-15 10:21:18.957733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2cfe9000 sqhd:52b0 p:0 m:0 dnr:0 00:13:41.834 [2024-07-15 10:21:18.957743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:48512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a3afc0 len:0x10000 key:0x182100 00:13:41.834 [2024-07-15 10:21:18.957750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2cfe9000 sqhd:52b0 p:0 m:0 dnr:0 00:13:41.834 [2024-07-15 10:21:18.957759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:48640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a2af40 len:0x10000 key:0x182100 00:13:41.834 [2024-07-15 10:21:18.957766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2cfe9000 sqhd:52b0 p:0 m:0 dnr:0 00:13:41.834 [2024-07-15 10:21:18.957777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:48768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a1aec0 len:0x10000 key:0x182100 
00:13:41.834 [2024-07-15 10:21:18.957784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2cfe9000 sqhd:52b0 p:0 m:0 dnr:0 00:13:41.834 [2024-07-15 10:21:18.957793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:48896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a0ae40 len:0x10000 key:0x182100 00:13:41.834 [2024-07-15 10:21:18.957801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2cfe9000 sqhd:52b0 p:0 m:0 dnr:0 00:13:41.834 [2024-07-15 10:21:18.957811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:49024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ea980 len:0x10000 key:0x182500 00:13:41.834 [2024-07-15 10:21:18.957818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2cfe9000 sqhd:52b0 p:0 m:0 dnr:0 00:13:41.834 [2024-07-15 10:21:18.957828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:40960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000134c7000 len:0x10000 key:0x182400 00:13:41.834 [2024-07-15 10:21:18.957835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2cfe9000 sqhd:52b0 p:0 m:0 dnr:0 00:13:41.834 [2024-07-15 10:21:18.957844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:41088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000134a6000 len:0x10000 key:0x182400 00:13:41.835 [2024-07-15 10:21:18.957851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2cfe9000 sqhd:52b0 p:0 m:0 dnr:0 00:13:41.835 [2024-07-15 10:21:18.957861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:41216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013485000 len:0x10000 key:0x182400 00:13:41.835 [2024-07-15 10:21:18.957869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2cfe9000 sqhd:52b0 p:0 m:0 dnr:0 00:13:41.835 [2024-07-15 10:21:18.957879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:41344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013464000 len:0x10000 key:0x182400 00:13:41.835 [2024-07-15 10:21:18.957886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2cfe9000 sqhd:52b0 p:0 m:0 dnr:0 00:13:41.835 [2024-07-15 10:21:18.957896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:41472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b77b000 len:0x10000 key:0x182400 00:13:41.835 [2024-07-15 10:21:18.957903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2cfe9000 sqhd:52b0 p:0 m:0 dnr:0 00:13:41.835 [2024-07-15 10:21:18.957913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:41600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b75a000 len:0x10000 key:0x182400 00:13:41.835 [2024-07-15 10:21:18.957920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2cfe9000 sqhd:52b0 p:0 m:0 dnr:0 00:13:41.835 [2024-07-15 10:21:18.957930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:41728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b739000 len:0x10000 key:0x182400 00:13:41.835 [2024-07-15 10:21:18.957937] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2cfe9000 sqhd:52b0 p:0 m:0 dnr:0 00:13:41.835 [2024-07-15 10:21:18.957946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:41856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b718000 len:0x10000 key:0x182400 00:13:41.835 [2024-07-15 10:21:18.957953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2cfe9000 sqhd:52b0 p:0 m:0 dnr:0 00:13:41.835 [2024-07-15 10:21:18.957964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:41984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b6f7000 len:0x10000 key:0x182400 00:13:41.835 [2024-07-15 10:21:18.957971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2cfe9000 sqhd:52b0 p:0 m:0 dnr:0 00:13:41.835 [2024-07-15 10:21:18.957980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:42112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b6d6000 len:0x10000 key:0x182400 00:13:41.835 [2024-07-15 10:21:18.957987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2cfe9000 sqhd:52b0 p:0 m:0 dnr:0 00:13:41.835 [2024-07-15 10:21:18.957997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:42240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b6b5000 len:0x10000 key:0x182400 00:13:41.835 [2024-07-15 10:21:18.958004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2cfe9000 sqhd:52b0 p:0 m:0 dnr:0 00:13:41.835 [2024-07-15 10:21:18.958014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:42368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b694000 len:0x10000 key:0x182400 00:13:41.835 [2024-07-15 10:21:18.958022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2cfe9000 sqhd:52b0 p:0 m:0 dnr:0 00:13:41.835 [2024-07-15 10:21:18.958031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:42496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b673000 len:0x10000 key:0x182400 00:13:41.835 [2024-07-15 10:21:18.958038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2cfe9000 sqhd:52b0 p:0 m:0 dnr:0 00:13:41.835 [2024-07-15 10:21:18.958048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:42624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b652000 len:0x10000 key:0x182400 00:13:41.835 [2024-07-15 10:21:18.958055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2cfe9000 sqhd:52b0 p:0 m:0 dnr:0 00:13:41.835 [2024-07-15 10:21:18.958065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:42752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b631000 len:0x10000 key:0x182400 00:13:41.835 [2024-07-15 10:21:18.958072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2cfe9000 sqhd:52b0 p:0 m:0 dnr:0 00:13:41.835 [2024-07-15 10:21:18.958081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:42880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b610000 len:0x10000 key:0x182400 00:13:41.835 [2024-07-15 10:21:18.958088] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2cfe9000 sqhd:52b0 p:0 m:0 dnr:0 00:13:41.835 [2024-07-15 10:21:18.958098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:43008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bc61000 len:0x10000 key:0x182400 00:13:41.835 [2024-07-15 10:21:18.958105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2cfe9000 sqhd:52b0 p:0 m:0 dnr:0 00:13:41.835 [2024-07-15 10:21:18.958114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:43136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bc40000 len:0x10000 key:0x182400 00:13:41.835 [2024-07-15 10:21:18.958121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2cfe9000 sqhd:52b0 p:0 m:0 dnr:0 00:13:41.835 [2024-07-15 10:21:18.958130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:43264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c03f000 len:0x10000 key:0x182400 00:13:41.835 [2024-07-15 10:21:18.958137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2cfe9000 sqhd:52b0 p:0 m:0 dnr:0 00:13:41.835 [2024-07-15 10:21:18.958146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:43392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c01e000 len:0x10000 key:0x182400 00:13:41.835 [2024-07-15 10:21:18.958155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2cfe9000 sqhd:52b0 p:0 m:0 dnr:0 00:13:41.835 [2024-07-15 10:21:18.958165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:43520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bffd000 len:0x10000 key:0x182400 00:13:41.835 [2024-07-15 10:21:18.958172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2cfe9000 sqhd:52b0 p:0 m:0 dnr:0 00:13:41.835 [2024-07-15 10:21:18.958182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:43648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bfdc000 len:0x10000 key:0x182400 00:13:41.835 [2024-07-15 10:21:18.958189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2cfe9000 sqhd:52b0 p:0 m:0 dnr:0 00:13:41.835 [2024-07-15 10:21:18.958198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:43776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bfbb000 len:0x10000 key:0x182400 00:13:41.835 [2024-07-15 10:21:18.958205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2cfe9000 sqhd:52b0 p:0 m:0 dnr:0 00:13:41.835 [2024-07-15 10:21:18.958214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:43904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bf9a000 len:0x10000 key:0x182400 00:13:41.835 [2024-07-15 10:21:18.958221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2cfe9000 sqhd:52b0 p:0 m:0 dnr:0 00:13:41.835 [2024-07-15 10:21:18.958233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:44032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bf79000 len:0x10000 key:0x182400 00:13:41.835 [2024-07-15 10:21:18.958241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:2cfe9000 sqhd:52b0 p:0 m:0 dnr:0 00:13:41.835 [2024-07-15 10:21:18.958250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:44160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bf58000 len:0x10000 key:0x182400 00:13:41.835 [2024-07-15 10:21:18.958257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2cfe9000 sqhd:52b0 p:0 m:0 dnr:0 00:13:41.835 [2024-07-15 10:21:18.958266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:44288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bf37000 len:0x10000 key:0x182400 00:13:41.835 [2024-07-15 10:21:18.958273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2cfe9000 sqhd:52b0 p:0 m:0 dnr:0 00:13:41.835 [2024-07-15 10:21:18.958283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:44416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bf16000 len:0x10000 key:0x182400 00:13:41.835 [2024-07-15 10:21:18.958290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2cfe9000 sqhd:52b0 p:0 m:0 dnr:0 00:13:41.835 [2024-07-15 10:21:18.958300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:44544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000beb3000 len:0x10000 key:0x182400 00:13:41.835 [2024-07-15 10:21:18.958306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2cfe9000 sqhd:52b0 p:0 m:0 dnr:0 00:13:41.835 [2024-07-15 10:21:18.958316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:44672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000be92000 len:0x10000 key:0x182400 00:13:41.835 [2024-07-15 10:21:18.958323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2cfe9000 sqhd:52b0 p:0 m:0 dnr:0 00:13:41.835 [2024-07-15 10:21:18.958332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:44800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bc1f000 len:0x10000 key:0x182400 00:13:41.835 [2024-07-15 10:21:18.958344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2cfe9000 sqhd:52b0 p:0 m:0 dnr:0 00:13:41.835 [2024-07-15 10:21:18.958353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:44928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bbfe000 len:0x10000 key:0x182400 00:13:41.835 [2024-07-15 10:21:18.958361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2cfe9000 sqhd:52b0 p:0 m:0 dnr:0 00:13:41.835 [2024-07-15 10:21:18.958370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:45056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bbdd000 len:0x10000 key:0x182400 00:13:41.835 [2024-07-15 10:21:18.958377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2cfe9000 sqhd:52b0 p:0 m:0 dnr:0 00:13:41.835 [2024-07-15 10:21:18.958386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:45184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bbbc000 len:0x10000 key:0x182400 00:13:41.835 [2024-07-15 10:21:18.958394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2cfe9000 sqhd:52b0 p:0 m:0 
dnr:0 00:13:41.835 [2024-07-15 10:21:18.958403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:45312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bb9b000 len:0x10000 key:0x182400 00:13:41.835 [2024-07-15 10:21:18.958410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2cfe9000 sqhd:52b0 p:0 m:0 dnr:0 00:13:41.835 [2024-07-15 10:21:18.958419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:45440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bb7a000 len:0x10000 key:0x182400 00:13:41.835 [2024-07-15 10:21:18.958427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2cfe9000 sqhd:52b0 p:0 m:0 dnr:0 00:13:41.835 [2024-07-15 10:21:18.958436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:45568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bb59000 len:0x10000 key:0x182400 00:13:41.835 [2024-07-15 10:21:18.958443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2cfe9000 sqhd:52b0 p:0 m:0 dnr:0 00:13:41.835 [2024-07-15 10:21:18.958452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:45696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bb38000 len:0x10000 key:0x182400 00:13:41.835 [2024-07-15 10:21:18.958459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2cfe9000 sqhd:52b0 p:0 m:0 dnr:0 00:13:41.835 [2024-07-15 10:21:18.958469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:45824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bb17000 len:0x10000 key:0x182400 00:13:41.835 [2024-07-15 10:21:18.958476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2cfe9000 sqhd:52b0 p:0 m:0 dnr:0 00:13:41.835 [2024-07-15 10:21:18.958485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:45952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000baf6000 len:0x10000 key:0x182400 00:13:41.835 [2024-07-15 10:21:18.958492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2cfe9000 sqhd:52b0 p:0 m:0 dnr:0 00:13:41.836 [2024-07-15 10:21:18.958501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:46080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bad5000 len:0x10000 key:0x182400 00:13:41.836 [2024-07-15 10:21:18.958508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2cfe9000 sqhd:52b0 p:0 m:0 dnr:0 00:13:41.836 [2024-07-15 10:21:18.958518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:46208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bab4000 len:0x10000 key:0x182400 00:13:41.836 [2024-07-15 10:21:18.958525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2cfe9000 sqhd:52b0 p:0 m:0 dnr:0 00:13:41.836 [2024-07-15 10:21:18.958535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:46336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ba93000 len:0x10000 key:0x182400 00:13:41.836 [2024-07-15 10:21:18.958543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2cfe9000 sqhd:52b0 p:0 m:0 dnr:0 00:13:41.836 [2024-07-15 
10:21:18.958552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ba72000 len:0x10000 key:0x182400 00:13:41.836 [2024-07-15 10:21:18.958559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2cfe9000 sqhd:52b0 p:0 m:0 dnr:0 00:13:41.836 [2024-07-15 10:21:18.960753] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019201580 was disconnected and freed. reset controller. 00:13:41.836 [2024-07-15 10:21:18.961975] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:13:41.836 task offset: 46592 on job bdev=Nvme0n1 fails 00:13:41.836 00:13:41.836 Latency(us) 00:13:41.836 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:41.836 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:41.836 Job: Nvme0n1 ended in about 1.57 seconds with error 00:13:41.836 Verification LBA range: start 0x0 length 0x400 00:13:41.836 Nvme0n1 : 1.57 854.67 53.42 40.70 0.00 70673.37 2553.17 1013623.47 00:13:41.836 =================================================================================================================== 00:13:41.836 Total : 854.67 53.42 40.70 0.00 70673.37 2553.17 1013623.47 00:13:41.836 [2024-07-15 10:21:18.963990] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:41.836 10:21:18 nvmf_rdma.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2859601 00:13:41.836 10:21:18 nvmf_rdma.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:13:41.836 10:21:18 nvmf_rdma.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:13:41.836 10:21:18 nvmf_rdma.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:13:41.836 10:21:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:13:41.836 10:21:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:13:41.836 10:21:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:41.836 10:21:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:41.836 { 00:13:41.836 "params": { 00:13:41.836 "name": "Nvme$subsystem", 00:13:41.836 "trtype": "$TEST_TRANSPORT", 00:13:41.836 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:41.836 "adrfam": "ipv4", 00:13:41.836 "trsvcid": "$NVMF_PORT", 00:13:41.836 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:41.836 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:41.836 "hdgst": ${hdgst:-false}, 00:13:41.836 "ddgst": ${ddgst:-false} 00:13:41.836 }, 00:13:41.836 "method": "bdev_nvme_attach_controller" 00:13:41.836 } 00:13:41.836 EOF 00:13:41.836 )") 00:13:41.836 10:21:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:13:41.836 10:21:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
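The heredoc template traced just above is what gen_nvmf_target_json expands, one entry per subsystem, into the --json config handed to bdevperf on the next lines. Purely as a reading aid, here is a minimal sketch of how the same controller could be attached by hand through rpc.py instead of the rendered JSON, reusing the address and NQNs visible in this log and assuming the usual SPDK rpc flag names; this call is not part of the test run:
  # sketch only: interactive equivalent of the bdev_nvme_attach_controller entry rendered below
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller \
      -b Nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0
Either path ends with the namespace exposed as bdev Nvme0n1, which is the device the verify workload in this suite targets.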
00:13:41.836 10:21:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:13:41.836 10:21:18 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:41.836 "params": { 00:13:41.836 "name": "Nvme0", 00:13:41.836 "trtype": "rdma", 00:13:41.836 "traddr": "192.168.100.8", 00:13:41.836 "adrfam": "ipv4", 00:13:41.836 "trsvcid": "4420", 00:13:41.836 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:41.836 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:41.836 "hdgst": false, 00:13:41.836 "ddgst": false 00:13:41.836 }, 00:13:41.836 "method": "bdev_nvme_attach_controller" 00:13:41.836 }' 00:13:41.836 [2024-07-15 10:21:19.021379] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:13:41.836 [2024-07-15 10:21:19.021435] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2859955 ] 00:13:42.097 EAL: No free 2048 kB hugepages reported on node 1 00:13:42.097 [2024-07-15 10:21:19.087786] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:42.097 [2024-07-15 10:21:19.152404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:42.358 Running I/O for 1 seconds... 00:13:43.300 00:13:43.300 Latency(us) 00:13:43.300 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:43.300 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:43.300 Verification LBA range: start 0x0 length 0x400 00:13:43.300 Nvme0n1 : 1.01 2526.07 157.88 0.00 0.00 24766.21 624.64 46749.01 00:13:43.300 =================================================================================================================== 00:13:43.300 Total : 2526.07 157.88 0.00 0.00 24766.21 624.64 46749.01 00:13:43.300 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 2859601 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:13:43.300 10:21:20 nvmf_rdma.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:13:43.300 10:21:20 nvmf_rdma.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:13:43.562 10:21:20 nvmf_rdma.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:13:43.562 10:21:20 nvmf_rdma.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:43.562 10:21:20 nvmf_rdma.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:13:43.562 10:21:20 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:43.562 10:21:20 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:13:43.562 10:21:20 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:43.562 10:21:20 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:43.562 10:21:20 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:13:43.562 10:21:20 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:43.562 10:21:20 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:43.562 rmmod nvme_rdma 00:13:43.562 rmmod 
nvme_fabrics 00:13:43.562 10:21:20 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:43.562 10:21:20 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:13:43.562 10:21:20 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:13:43.562 10:21:20 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 2859235 ']' 00:13:43.562 10:21:20 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 2859235 00:13:43.562 10:21:20 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 2859235 ']' 00:13:43.562 10:21:20 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 2859235 00:13:43.562 10:21:20 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:13:43.562 10:21:20 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:43.562 10:21:20 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2859235 00:13:43.562 10:21:20 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:43.562 10:21:20 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:43.562 10:21:20 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2859235' 00:13:43.562 killing process with pid 2859235 00:13:43.562 10:21:20 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 2859235 00:13:43.562 10:21:20 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 2859235 00:13:43.823 [2024-07-15 10:21:20.786573] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:13:43.823 10:21:20 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:43.823 10:21:20 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:13:43.823 10:21:20 nvmf_rdma.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:13:43.823 00:13:43.823 real 0m13.300s 00:13:43.823 user 0m24.489s 00:13:43.823 sys 0m6.971s 00:13:43.823 10:21:20 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:43.823 10:21:20 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:43.823 ************************************ 00:13:43.823 END TEST nvmf_host_management 00:13:43.823 ************************************ 00:13:43.823 10:21:20 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:13:43.823 10:21:20 nvmf_rdma -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:13:43.823 10:21:20 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:43.823 10:21:20 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:43.823 10:21:20 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:13:43.823 ************************************ 00:13:43.823 START TEST nvmf_lvol 00:13:43.823 ************************************ 00:13:43.823 10:21:20 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:13:43.823 * Looking for test storage... 
00:13:43.823 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:43.823 10:21:20 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:43.823 10:21:20 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:13:43.823 10:21:20 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:43.823 10:21:20 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:43.823 10:21:20 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:43.823 10:21:20 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:43.823 10:21:20 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:43.823 10:21:20 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:43.823 10:21:20 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:43.823 10:21:20 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:43.823 10:21:20 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:43.823 10:21:20 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:43.823 10:21:20 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:43.823 10:21:21 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:43.823 10:21:21 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:43.823 10:21:21 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:43.823 10:21:21 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:43.823 10:21:21 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:43.823 10:21:21 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:43.823 10:21:21 nvmf_rdma.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:43.823 10:21:21 nvmf_rdma.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:43.823 10:21:21 nvmf_rdma.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:43.823 10:21:21 nvmf_rdma.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.823 10:21:21 nvmf_rdma.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
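The NVMF_*/NVME_* variables assigned above (port 4420, the 192.168.100 address prefix, the generated host NQN and host ID) are the pieces the common helpers stitch into initiator-side connect commands. As a hedged illustration only, combining them with standard nvme-cli flags would give something like the call below; this exact command is not issued by the lvol suite, which drives everything through rpc.py:
  # illustrative expansion of $NVME_CONNECT plus the variables set above (not run by this test)
  nvme connect -t rdma -a 192.168.100.8 -s 4420 \
      -n nqn.2016-06.io.spdk:testnqn \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
      --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396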
00:13:43.824 10:21:21 nvmf_rdma.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.824 10:21:21 nvmf_rdma.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:13:43.824 10:21:21 nvmf_rdma.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.824 10:21:21 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:13:43.824 10:21:21 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:43.824 10:21:21 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:43.824 10:21:21 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:43.824 10:21:21 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:43.824 10:21:21 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:43.824 10:21:21 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:43.824 10:21:21 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:43.824 10:21:21 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:43.824 10:21:21 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:43.824 10:21:21 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:43.824 10:21:21 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:13:43.824 10:21:21 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:13:43.824 10:21:21 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:43.824 10:21:21 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:13:43.824 10:21:21 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:13:43.824 10:21:21 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:43.824 10:21:21 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:43.824 10:21:21 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:43.824 10:21:21 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:43.824 10:21:21 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:43.824 10:21:21 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:43.824 10:21:21 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:44.085 10:21:21 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt 
]] 00:13:44.085 10:21:21 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:44.085 10:21:21 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:13:44.085 10:21:21 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:52.225 10:21:28 
nvmf_rdma.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:13:52.225 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:13:52.225 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:13:52.225 Found net devices under 0000:98:00.0: mlx_0_0 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:13:52.225 Found net devices under 0000:98:00.1: mlx_0_1 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:52.225 10:21:28 
nvmf_rdma.nvmf_lvol -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@420 -- # rdma_device_init 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@58 -- # uname 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@502 -- # allocate_nic_ips 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- 
nvmf/common.sh@113 -- # cut -d/ -f1 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:52.225 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:52.225 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:13:52.225 altname enp152s0f0np0 00:13:52.225 altname ens817f0np0 00:13:52.225 inet 192.168.100.8/24 scope global mlx_0_0 00:13:52.225 valid_lft forever preferred_lft forever 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:13:52.225 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:52.225 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:13:52.225 altname enp152s0f1np1 00:13:52.225 altname ens817f1np1 00:13:52.225 inet 192.168.100.9/24 scope global mlx_0_1 00:13:52.225 valid_lft forever preferred_lft forever 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:52.225 10:21:28 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:52.225 10:21:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:52.225 10:21:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:52.225 10:21:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:52.225 10:21:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:52.225 10:21:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:52.225 10:21:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:52.225 10:21:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:13:52.225 10:21:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:52.225 10:21:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:52.225 10:21:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ 
mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:52.225 10:21:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:52.225 10:21:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:52.225 10:21:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:52.225 10:21:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:13:52.225 10:21:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:52.225 10:21:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:52.225 10:21:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:52.225 10:21:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:52.225 10:21:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:52.225 10:21:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:52.225 10:21:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:52.225 10:21:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:52.225 10:21:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:52.225 10:21:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:52.225 10:21:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:52.225 10:21:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:52.225 10:21:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:13:52.225 192.168.100.9' 00:13:52.225 10:21:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:13:52.225 192.168.100.9' 00:13:52.225 10:21:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@457 -- # head -n 1 00:13:52.225 10:21:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:52.225 10:21:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:13:52.225 192.168.100.9' 00:13:52.225 10:21:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@458 -- # tail -n +2 00:13:52.225 10:21:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@458 -- # head -n 1 00:13:52.226 10:21:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:52.226 10:21:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:13:52.226 10:21:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:52.226 10:21:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:13:52.226 10:21:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:13:52.226 10:21:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:13:52.226 10:21:29 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:13:52.226 10:21:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:52.226 10:21:29 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:52.226 10:21:29 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:52.226 10:21:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=2864634 00:13:52.226 10:21:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 2864634 00:13:52.226 10:21:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:52.226 10:21:29 nvmf_rdma.nvmf_lvol -- 
common/autotest_common.sh@829 -- # '[' -z 2864634 ']' 00:13:52.226 10:21:29 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:52.226 10:21:29 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:52.226 10:21:29 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:52.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:52.226 10:21:29 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:52.226 10:21:29 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:52.226 [2024-07-15 10:21:29.165795] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:13:52.226 [2024-07-15 10:21:29.165864] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:52.226 EAL: No free 2048 kB hugepages reported on node 1 00:13:52.226 [2024-07-15 10:21:29.240189] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:52.226 [2024-07-15 10:21:29.314162] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:52.226 [2024-07-15 10:21:29.314200] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:52.226 [2024-07-15 10:21:29.314208] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:52.226 [2024-07-15 10:21:29.314214] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:52.226 [2024-07-15 10:21:29.314220] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:52.226 [2024-07-15 10:21:29.314335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:52.226 [2024-07-15 10:21:29.314468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:52.226 [2024-07-15 10:21:29.314471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.796 10:21:29 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:52.796 10:21:29 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:13:52.796 10:21:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:52.796 10:21:29 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:52.796 10:21:29 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:52.796 10:21:29 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:52.796 10:21:29 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:53.057 [2024-07-15 10:21:30.161823] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xda9720/0xdadc10) succeed. 00:13:53.057 [2024-07-15 10:21:30.176149] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xdaacc0/0xdef2a0) succeed. 
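With the RDMA transport listening and both mlx5 devices registered, the suite now layers logical volumes on top of a RAID-0 of malloc bdevs and exports the result over NVMe-oF. The sketch below is condensed from the rpc.py calls traced in the following lines (rpc.py stands for the full scripts/rpc.py path used in this log; the lvstore and lvol UUIDs are generated at run time, so the ones printed below are per-run values):
  # condensed view of the target-side setup performed next (UUIDs are run-time values)
  rpc.py bdev_malloc_create 64 512                          # -> Malloc0
  rpc.py bdev_malloc_create 64 512                          # -> Malloc1
  rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  rpc.py bdev_lvol_create_lvstore raid0 lvs                 # -> lvstore UUID
  rpc.py bdev_lvol_create -u <lvstore-uuid> lvol 20         # -> lvol bdev (LVOL_BDEV_INIT_SIZE=20)
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
spdk_nvme_perf then drives a random-write workload against that namespace while the lvol is snapshotted, resized, cloned and inflated underneath it, as the trace below shows.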
00:13:53.318 10:21:30 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:53.318 10:21:30 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:13:53.318 10:21:30 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:53.580 10:21:30 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:13:53.580 10:21:30 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:13:53.841 10:21:30 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:13:53.841 10:21:31 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=abb8ba87-b8de-4931-b2b0-5f510318bb5a 00:13:53.841 10:21:31 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u abb8ba87-b8de-4931-b2b0-5f510318bb5a lvol 20 00:13:54.122 10:21:31 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=38fc35fc-bf10-4bd6-931e-8c9dae2e1ec4 00:13:54.122 10:21:31 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:54.422 10:21:31 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 38fc35fc-bf10-4bd6-931e-8c9dae2e1ec4 00:13:54.422 10:21:31 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:13:54.684 [2024-07-15 10:21:31.610455] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:54.684 10:21:31 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:13:54.684 10:21:31 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2865019 00:13:54.684 10:21:31 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:13:54.684 10:21:31 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:13:54.684 EAL: No free 2048 kB hugepages reported on node 1 00:13:55.628 10:21:32 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 38fc35fc-bf10-4bd6-931e-8c9dae2e1ec4 MY_SNAPSHOT 00:13:55.889 10:21:32 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=f8f56902-2a21-48ce-830d-01ce96bbcfd5 00:13:55.889 10:21:32 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 38fc35fc-bf10-4bd6-931e-8c9dae2e1ec4 30 00:13:56.150 10:21:33 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone f8f56902-2a21-48ce-830d-01ce96bbcfd5 MY_CLONE 00:13:56.150 10:21:33 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # 
clone=f0007496-bf34-4d34-867b-018bcf769aa9 00:13:56.150 10:21:33 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate f0007496-bf34-4d34-867b-018bcf769aa9 00:13:56.410 10:21:33 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2865019 00:14:06.434 Initializing NVMe Controllers 00:14:06.434 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:14:06.434 Controller IO queue size 128, less than required. 00:14:06.434 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:06.434 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:06.434 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:06.434 Initialization complete. Launching workers. 00:14:06.434 ======================================================== 00:14:06.434 Latency(us) 00:14:06.434 Device Information : IOPS MiB/s Average min max 00:14:06.434 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 23514.70 91.85 5444.21 2205.19 30867.09 00:14:06.434 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 23564.70 92.05 5432.51 2990.83 28078.07 00:14:06.434 ======================================================== 00:14:06.434 Total : 47079.39 183.90 5438.36 2205.19 30867.09 00:14:06.434 00:14:06.434 10:21:43 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:06.434 10:21:43 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 38fc35fc-bf10-4bd6-931e-8c9dae2e1ec4 00:14:06.434 10:21:43 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u abb8ba87-b8de-4931-b2b0-5f510318bb5a 00:14:06.695 10:21:43 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:14:06.695 10:21:43 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:14:06.695 10:21:43 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:14:06.695 10:21:43 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:06.695 10:21:43 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:14:06.695 10:21:43 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:14:06.695 10:21:43 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:14:06.695 10:21:43 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:14:06.695 10:21:43 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:06.695 10:21:43 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:14:06.695 rmmod nvme_rdma 00:14:06.695 rmmod nvme_fabrics 00:14:06.695 10:21:43 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:06.695 10:21:43 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:14:06.696 10:21:43 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:14:06.696 10:21:43 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 2864634 ']' 00:14:06.696 10:21:43 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 2864634 00:14:06.696 10:21:43 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 2864634 ']' 00:14:06.696 10:21:43 nvmf_rdma.nvmf_lvol -- 
common/autotest_common.sh@952 -- # kill -0 2864634 00:14:06.696 10:21:43 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:14:06.696 10:21:43 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:06.696 10:21:43 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2864634 00:14:06.696 10:21:43 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:06.696 10:21:43 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:06.696 10:21:43 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2864634' 00:14:06.696 killing process with pid 2864634 00:14:06.696 10:21:43 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 2864634 00:14:06.696 10:21:43 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 2864634 00:14:06.958 10:21:43 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:06.958 10:21:43 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:14:06.958 00:14:06.958 real 0m23.114s 00:14:06.958 user 1m10.716s 00:14:06.958 sys 0m6.988s 00:14:06.958 10:21:43 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:06.958 10:21:43 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:06.958 ************************************ 00:14:06.958 END TEST nvmf_lvol 00:14:06.958 ************************************ 00:14:06.958 10:21:44 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:14:06.958 10:21:44 nvmf_rdma -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:14:06.958 10:21:44 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:06.958 10:21:44 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:06.958 10:21:44 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:14:06.958 ************************************ 00:14:06.958 START TEST nvmf_lvs_grow 00:14:06.958 ************************************ 00:14:06.958 10:21:44 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:14:07.219 * Looking for test storage... 
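For readers skimming the trace, the nvmf_lvol run that wraps up just above boils down to a short rpc.py sequence. The sketch below is a condensed recap of the xtrace lines above, not a verbatim excerpt: the paths are folded into shell variables for readability, and the UUIDs are simply the ones this particular run generated.

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf
  $rpc bdev_malloc_create 64 512                                   # -> Malloc0
  $rpc bdev_malloc_create 64 512                                   # -> Malloc1
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'   # RAID0 over the two malloc bdevs
  lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)                   # abb8ba87-... in this run
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)                  # 20 MiB lvol, 38fc35fc-... here
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
  $perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
        -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &         # 10 s randwrite from cores 3 and 4
  snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)              # taken while the perf job runs
  $rpc bdev_lvol_resize "$lvol" 30
  clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
  $rpc bdev_lvol_inflate "$clone"
  wait                                                             # then subsystem, lvol and lvstore are deleted

The latency table above (roughly 47k IOPS across the two cores at about 5.4 ms average with queue depth 128) is the output of that background perf job.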
00:14:07.219 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:07.219 10:21:44 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:07.219 10:21:44 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:14:07.219 10:21:44 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:07.219 10:21:44 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:07.219 10:21:44 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:07.219 10:21:44 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:07.219 10:21:44 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:07.219 10:21:44 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:07.219 10:21:44 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:07.219 10:21:44 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:07.219 10:21:44 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:07.219 10:21:44 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:07.219 10:21:44 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:07.219 10:21:44 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:07.219 10:21:44 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:07.219 10:21:44 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:07.219 10:21:44 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:07.219 10:21:44 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:07.219 10:21:44 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:07.219 10:21:44 nvmf_rdma.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:07.219 10:21:44 nvmf_rdma.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:07.219 10:21:44 nvmf_rdma.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:07.219 10:21:44 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.219 10:21:44 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.219 10:21:44 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.219 10:21:44 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:14:07.219 10:21:44 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.219 10:21:44 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:14:07.219 10:21:44 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:07.219 10:21:44 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:07.219 10:21:44 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:07.219 10:21:44 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:07.219 10:21:44 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:07.219 10:21:44 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:07.219 10:21:44 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:07.219 10:21:44 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:07.219 10:21:44 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:07.219 10:21:44 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:07.219 10:21:44 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:14:07.219 10:21:44 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:14:07.219 10:21:44 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:07.219 10:21:44 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:07.219 10:21:44 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:07.219 10:21:44 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:07.219 10:21:44 nvmf_rdma.nvmf_lvs_grow -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:07.219 10:21:44 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:07.219 10:21:44 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:07.219 10:21:44 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:07.219 10:21:44 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:07.219 10:21:44 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:14:07.219 10:21:44 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:15.365 10:21:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:15.365 10:21:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:14:15.365 10:21:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:15.365 10:21:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:15.365 10:21:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:15.365 10:21:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:15.365 10:21:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:15.365 10:21:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:14:15.365 10:21:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:15.365 10:21:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:14:15.365 10:21:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:14:15.365 10:21:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:14:15.365 10:21:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:14:15.365 10:21:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:14:15.365 10:21:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:14:15.365 10:21:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:15.365 10:21:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:15.365 10:21:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:15.365 10:21:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:15.365 10:21:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:15.365 10:21:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:15.365 10:21:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:15.365 10:21:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:15.365 10:21:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:15.365 10:21:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:15.365 10:21:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:15.365 10:21:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:15.365 10:21:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:14:15.365 10:21:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@322 -- # 
pci_devs+=("${x722[@]}") 00:14:15.365 10:21:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:14:15.365 10:21:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:14:15.365 10:21:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:14:15.365 10:21:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:15.365 10:21:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:15.365 10:21:51 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:14:15.365 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:14:15.365 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:15.365 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:15.365 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:15.365 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:15.365 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:15.365 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:15.365 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:15.365 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:14:15.365 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:14:15.365 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:15.365 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:15.365 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:15.365 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:15.365 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:15.365 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:15.365 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:15.365 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:14:15.365 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:15.365 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:15.365 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:15.365 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:15.365 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:15.365 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:14:15.365 Found net devices under 0000:98:00.0: mlx_0_0 00:14:15.365 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:15.365 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:15.365 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:15.365 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:15.365 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:15.365 10:21:52 
nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:15.365 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:14:15.365 Found net devices under 0000:98:00.1: mlx_0_1 00:14:15.365 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:15.365 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:15.365 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:14:15.365 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:15.365 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:14:15.365 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:14:15.365 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@420 -- # rdma_device_init 00:14:15.365 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:14:15.365 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@58 -- # uname 00:14:15.365 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:14:15.365 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@62 -- # modprobe ib_cm 00:14:15.365 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@63 -- # modprobe ib_core 00:14:15.365 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@64 -- # modprobe ib_umad 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@66 -- # modprobe iw_cm 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@502 -- # allocate_nic_ips 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@73 -- # get_rdma_if_list 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:14:15.366 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:15.366 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:14:15.366 altname enp152s0f0np0 00:14:15.366 altname ens817f0np0 00:14:15.366 inet 192.168.100.8/24 scope global mlx_0_0 00:14:15.366 valid_lft forever preferred_lft forever 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:14:15.366 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:15.366 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:14:15.366 altname enp152s0f1np1 00:14:15.366 altname ens817f1np1 00:14:15.366 inet 192.168.100.9/24 scope global mlx_0_1 00:14:15.366 valid_lft forever preferred_lft forever 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@86 -- # get_rdma_if_list 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:14:15.366 192.168.100.9' 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:14:15.366 192.168.100.9' 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@457 -- # head -n 1 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:14:15.366 192.168.100.9' 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@458 -- # tail -n +2 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@458 -- # head -n 1 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma 
--num-shared-buffers 1024' 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:14:15.366 10:21:52 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:14:15.367 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:15.367 10:21:52 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:15.367 10:21:52 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:15.367 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=2871702 00:14:15.367 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 2871702 00:14:15.367 10:21:52 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:15.367 10:21:52 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 2871702 ']' 00:14:15.367 10:21:52 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:15.367 10:21:52 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:15.367 10:21:52 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:15.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:15.367 10:21:52 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:15.367 10:21:52 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:15.367 [2024-07-15 10:21:52.298667] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:14:15.367 [2024-07-15 10:21:52.298721] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:15.367 EAL: No free 2048 kB hugepages reported on node 1 00:14:15.367 [2024-07-15 10:21:52.365448] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:15.367 [2024-07-15 10:21:52.429004] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:15.367 [2024-07-15 10:21:52.429044] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:15.367 [2024-07-15 10:21:52.429052] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:15.367 [2024-07-15 10:21:52.429058] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:15.367 [2024-07-15 10:21:52.429064] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
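Everything from gather_supported_nvmf_pci_devs down to the RDMA_IP_LIST lines above is environment detection: find the mlx5 ports under 0000:98:00.0/.1, load the RDMA stack, and work out which IPv4 addresses to use as target addresses. Stripped of the xtrace bookkeeping, it reduces to roughly the following; the interface names and addresses are the ones this host reports, and the loop is just a compact rendering of the individual modprobe calls in the trace.

  # core IB/RDMA kernel stack, then the NVMe over RDMA host driver
  for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
      modprobe "$m"
  done
  modprobe nvme-rdma
  # the mlx5 ports show up as mlx_0_0 and mlx_0_1; their IPv4 addresses
  # become the first and second target IPs used for the rest of the suite
  NVMF_FIRST_TARGET_IP=$(ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1)   # 192.168.100.8
  NVMF_SECOND_TARGET_IP=$(ip -o -4 addr show mlx_0_1 | awk '{print $4}' | cut -d/ -f1)  # 192.168.100.9
  NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
  NVME_CONNECT='nvme connect -i 15'    # cap I/O queues at 15 on these NICs, per common.sh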
00:14:15.367 [2024-07-15 10:21:52.429091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:15.939 10:21:53 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:15.939 10:21:53 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:14:15.939 10:21:53 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:15.939 10:21:53 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:15.939 10:21:53 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:15.939 10:21:53 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:15.939 10:21:53 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:14:16.200 [2024-07-15 10:21:53.280412] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2407f90/0x240c480) succeed. 00:14:16.200 [2024-07-15 10:21:53.293660] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2409490/0x244db10) succeed. 00:14:16.200 10:21:53 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:14:16.200 10:21:53 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:16.200 10:21:53 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:16.200 10:21:53 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:16.200 ************************************ 00:14:16.200 START TEST lvs_grow_clean 00:14:16.200 ************************************ 00:14:16.200 10:21:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:14:16.200 10:21:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:16.200 10:21:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:16.200 10:21:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:16.200 10:21:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:16.200 10:21:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:16.200 10:21:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:16.200 10:21:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:16.462 10:21:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:16.462 10:21:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:16.462 10:21:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:16.462 10:21:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 
--md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:16.722 10:21:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=fa2da3d3-3123-4b88-ad74-180694dbc55d 00:14:16.722 10:21:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa2da3d3-3123-4b88-ad74-180694dbc55d 00:14:16.722 10:21:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:16.722 10:21:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:16.722 10:21:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:16.722 10:21:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u fa2da3d3-3123-4b88-ad74-180694dbc55d lvol 150 00:14:16.982 10:21:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=8e4a7e1c-8777-4814-8f8e-538be5cbfdc8 00:14:16.982 10:21:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:16.982 10:21:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:17.243 [2024-07-15 10:21:54.181312] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:17.243 [2024-07-15 10:21:54.181371] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:17.243 true 00:14:17.243 10:21:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa2da3d3-3123-4b88-ad74-180694dbc55d 00:14:17.243 10:21:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:17.243 10:21:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:17.243 10:21:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:17.503 10:21:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8e4a7e1c-8777-4814-8f8e-538be5cbfdc8 00:14:17.503 10:21:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:14:17.764 [2024-07-15 10:21:54.759442] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:17.764 10:21:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:14:17.764 10:21:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2872394 
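At this point lvs_grow_clean has finished building its stack and has just recorded the pid of the bdevperf client it launched. Condensed from the xtrace above (WS, rpc and aio are shorthand variables introduced here; the lvstore UUID fa2da3d3-... is specific to this run):

  WS=/var/jenkins/workspace/nvmf-phy-autotest
  rpc=$WS/spdk/scripts/rpc.py
  aio=$WS/spdk/test/nvmf/target/aio_bdev
  rm -f "$aio" && truncate -s 200M "$aio"              # 200 MiB backing file
  $rpc bdev_aio_create "$aio" aio_bdev 4096
  lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49 x 4 MiB clusters
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)     # 150 MiB, thick-provisioned
  truncate -s 400M "$aio"                              # grow the file on disk ...
  $rpc bdev_aio_rescan aio_bdev                        # ... AIO bdev goes 51200 -> 102400 blocks,
                                                       # but the lvstore still reports 49 clusters
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
  $WS/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
      -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &

Growing the lvstore into the extra space is deliberately deferred until the bdevperf workload is running, as the trace below shows.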
00:14:17.764 10:21:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:17.764 10:21:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2872394 /var/tmp/bdevperf.sock 00:14:17.764 10:21:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:17.764 10:21:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 2872394 ']' 00:14:17.764 10:21:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:17.764 10:21:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:17.764 10:21:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:17.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:17.764 10:21:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:17.764 10:21:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:14:17.764 [2024-07-15 10:21:54.959155] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:14:17.764 [2024-07-15 10:21:54.959208] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2872394 ] 00:14:18.025 EAL: No free 2048 kB hugepages reported on node 1 00:14:18.025 [2024-07-15 10:21:55.042256] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:18.025 [2024-07-15 10:21:55.106809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:18.596 10:21:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:18.596 10:21:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:14:18.596 10:21:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:18.856 Nvme0n1 00:14:18.856 10:21:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:19.117 [ 00:14:19.117 { 00:14:19.117 "name": "Nvme0n1", 00:14:19.117 "aliases": [ 00:14:19.117 "8e4a7e1c-8777-4814-8f8e-538be5cbfdc8" 00:14:19.117 ], 00:14:19.117 "product_name": "NVMe disk", 00:14:19.118 "block_size": 4096, 00:14:19.118 "num_blocks": 38912, 00:14:19.118 "uuid": "8e4a7e1c-8777-4814-8f8e-538be5cbfdc8", 00:14:19.118 "assigned_rate_limits": { 00:14:19.118 "rw_ios_per_sec": 0, 00:14:19.118 "rw_mbytes_per_sec": 0, 00:14:19.118 "r_mbytes_per_sec": 0, 00:14:19.118 "w_mbytes_per_sec": 0 00:14:19.118 }, 00:14:19.118 "claimed": false, 00:14:19.118 "zoned": false, 00:14:19.118 "supported_io_types": { 00:14:19.118 "read": true, 00:14:19.118 
"write": true, 00:14:19.118 "unmap": true, 00:14:19.118 "flush": true, 00:14:19.118 "reset": true, 00:14:19.118 "nvme_admin": true, 00:14:19.118 "nvme_io": true, 00:14:19.118 "nvme_io_md": false, 00:14:19.118 "write_zeroes": true, 00:14:19.118 "zcopy": false, 00:14:19.118 "get_zone_info": false, 00:14:19.118 "zone_management": false, 00:14:19.118 "zone_append": false, 00:14:19.118 "compare": true, 00:14:19.118 "compare_and_write": true, 00:14:19.118 "abort": true, 00:14:19.118 "seek_hole": false, 00:14:19.118 "seek_data": false, 00:14:19.118 "copy": true, 00:14:19.118 "nvme_iov_md": false 00:14:19.118 }, 00:14:19.118 "memory_domains": [ 00:14:19.118 { 00:14:19.118 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:14:19.118 "dma_device_type": 0 00:14:19.118 } 00:14:19.118 ], 00:14:19.118 "driver_specific": { 00:14:19.118 "nvme": [ 00:14:19.118 { 00:14:19.118 "trid": { 00:14:19.118 "trtype": "RDMA", 00:14:19.118 "adrfam": "IPv4", 00:14:19.118 "traddr": "192.168.100.8", 00:14:19.118 "trsvcid": "4420", 00:14:19.118 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:19.118 }, 00:14:19.118 "ctrlr_data": { 00:14:19.118 "cntlid": 1, 00:14:19.118 "vendor_id": "0x8086", 00:14:19.118 "model_number": "SPDK bdev Controller", 00:14:19.118 "serial_number": "SPDK0", 00:14:19.118 "firmware_revision": "24.09", 00:14:19.118 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:19.118 "oacs": { 00:14:19.118 "security": 0, 00:14:19.118 "format": 0, 00:14:19.118 "firmware": 0, 00:14:19.118 "ns_manage": 0 00:14:19.118 }, 00:14:19.118 "multi_ctrlr": true, 00:14:19.118 "ana_reporting": false 00:14:19.118 }, 00:14:19.118 "vs": { 00:14:19.118 "nvme_version": "1.3" 00:14:19.118 }, 00:14:19.118 "ns_data": { 00:14:19.118 "id": 1, 00:14:19.118 "can_share": true 00:14:19.118 } 00:14:19.118 } 00:14:19.118 ], 00:14:19.118 "mp_policy": "active_passive" 00:14:19.118 } 00:14:19.118 } 00:14:19.118 ] 00:14:19.118 10:21:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2872481 00:14:19.118 10:21:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:19.118 10:21:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:19.118 Running I/O for 10 seconds... 
00:14:20.062 Latency(us) 00:14:20.062 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:20.062 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:20.062 Nvme0n1 : 1.00 25793.00 100.75 0.00 0.00 0.00 0.00 0.00 00:14:20.062 =================================================================================================================== 00:14:20.062 Total : 25793.00 100.75 0.00 0.00 0.00 0.00 0.00 00:14:20.062 00:14:21.005 10:21:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u fa2da3d3-3123-4b88-ad74-180694dbc55d 00:14:21.293 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:21.293 Nvme0n1 : 2.00 26096.00 101.94 0.00 0.00 0.00 0.00 0.00 00:14:21.293 =================================================================================================================== 00:14:21.293 Total : 26096.00 101.94 0.00 0.00 0.00 0.00 0.00 00:14:21.293 00:14:21.293 true 00:14:21.293 10:21:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa2da3d3-3123-4b88-ad74-180694dbc55d 00:14:21.293 10:21:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:21.293 10:21:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:21.293 10:21:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:21.293 10:21:58 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2872481 00:14:22.233 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:22.233 Nvme0n1 : 3.00 26208.00 102.38 0.00 0.00 0.00 0.00 0.00 00:14:22.233 =================================================================================================================== 00:14:22.233 Total : 26208.00 102.38 0.00 0.00 0.00 0.00 0.00 00:14:22.233 00:14:23.174 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:23.174 Nvme0n1 : 4.00 26273.25 102.63 0.00 0.00 0.00 0.00 0.00 00:14:23.174 =================================================================================================================== 00:14:23.174 Total : 26273.25 102.63 0.00 0.00 0.00 0.00 0.00 00:14:23.174 00:14:24.112 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:24.112 Nvme0n1 : 5.00 26324.20 102.83 0.00 0.00 0.00 0.00 0.00 00:14:24.112 =================================================================================================================== 00:14:24.112 Total : 26324.20 102.83 0.00 0.00 0.00 0.00 0.00 00:14:24.112 00:14:25.052 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:25.053 Nvme0n1 : 6.00 26367.83 103.00 0.00 0.00 0.00 0.00 0.00 00:14:25.053 =================================================================================================================== 00:14:25.053 Total : 26367.83 103.00 0.00 0.00 0.00 0.00 0.00 00:14:25.053 00:14:26.438 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:26.438 Nvme0n1 : 7.00 26396.00 103.11 0.00 0.00 0.00 0.00 0.00 00:14:26.438 =================================================================================================================== 00:14:26.438 Total : 26396.00 103.11 0.00 0.00 
0.00 0.00 0.00 00:14:26.438 00:14:27.379 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:27.379 Nvme0n1 : 8.00 26423.88 103.22 0.00 0.00 0.00 0.00 0.00 00:14:27.379 =================================================================================================================== 00:14:27.379 Total : 26423.88 103.22 0.00 0.00 0.00 0.00 0.00 00:14:27.379 00:14:28.324 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:28.324 Nvme0n1 : 9.00 26439.44 103.28 0.00 0.00 0.00 0.00 0.00 00:14:28.324 =================================================================================================================== 00:14:28.324 Total : 26439.44 103.28 0.00 0.00 0.00 0.00 0.00 00:14:28.324 00:14:29.266 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:29.266 Nvme0n1 : 10.00 26457.50 103.35 0.00 0.00 0.00 0.00 0.00 00:14:29.266 =================================================================================================================== 00:14:29.266 Total : 26457.50 103.35 0.00 0.00 0.00 0.00 0.00 00:14:29.266 00:14:29.266 00:14:29.266 Latency(us) 00:14:29.266 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:29.266 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:29.266 Nvme0n1 : 10.00 26458.06 103.35 0.00 0.00 4833.85 3317.76 19879.25 00:14:29.266 =================================================================================================================== 00:14:29.266 Total : 26458.06 103.35 0.00 0.00 4833.85 3317.76 19879.25 00:14:29.266 0 00:14:29.266 10:22:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2872394 00:14:29.266 10:22:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 2872394 ']' 00:14:29.266 10:22:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 2872394 00:14:29.266 10:22:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:14:29.266 10:22:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:29.266 10:22:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2872394 00:14:29.266 10:22:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:29.266 10:22:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:29.266 10:22:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2872394' 00:14:29.266 killing process with pid 2872394 00:14:29.266 10:22:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 2872394 00:14:29.266 Received shutdown signal, test time was about 10.000000 seconds 00:14:29.266 00:14:29.266 Latency(us) 00:14:29.266 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:29.266 =================================================================================================================== 00:14:29.266 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:29.266 10:22:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 2872394 00:14:29.266 10:22:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:14:29.527 10:22:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:29.789 10:22:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa2da3d3-3123-4b88-ad74-180694dbc55d 00:14:29.789 10:22:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:14:29.789 10:22:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:14:29.789 10:22:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:14:29.790 10:22:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:30.051 [2024-07-15 10:22:07.070036] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:30.051 10:22:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa2da3d3-3123-4b88-ad74-180694dbc55d 00:14:30.051 10:22:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:14:30.051 10:22:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa2da3d3-3123-4b88-ad74-180694dbc55d 00:14:30.051 10:22:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:30.051 10:22:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:30.051 10:22:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:30.051 10:22:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:30.051 10:22:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:30.051 10:22:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:30.051 10:22:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:30.051 10:22:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:14:30.051 10:22:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa2da3d3-3123-4b88-ad74-180694dbc55d 00:14:30.312 request: 00:14:30.312 { 00:14:30.312 "uuid": "fa2da3d3-3123-4b88-ad74-180694dbc55d", 00:14:30.312 "method": "bdev_lvol_get_lvstores", 00:14:30.312 "req_id": 1 00:14:30.312 } 00:14:30.312 Got JSON-RPC error response 00:14:30.312 response: 00:14:30.312 { 00:14:30.312 "code": -19, 00:14:30.312 "message": "No such device" 00:14:30.312 } 
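The JSON-RPC error above is the expected outcome of a negative check, not a failure: after removing the listener and subsystem, the test deletes the AIO bdev out from under the lvstore, verifies the lvstore really is gone, then re-creates the AIO bdev on the same file so the lvstore and lvol are rediscovered from their on-disk metadata. Condensed, with the shorthand variables holding the run-specific UUIDs from this trace:

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  aio=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev
  lvs=fa2da3d3-3123-4b88-ad74-180694dbc55d
  lvol=8e4a7e1c-8777-4814-8f8e-538be5cbfdc8
  $rpc nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'   # 61 of 99 clusters free
  $rpc bdev_aio_delete aio_bdev               # lvstore loses its base bdev
  ! $rpc bdev_lvol_get_lvstores -u "$lvs"     # must now fail with -19 "No such device"
  $rpc bdev_aio_create "$aio" aio_bdev 4096   # re-attach the same backing file
  $rpc bdev_wait_for_examine                  # lvstore and lvol are rediscovered from metadata
  $rpc bdev_get_bdevs -b "$lvol" -t 2000      # the "Logical Volume" JSON that follows
  # the test then re-checks free_clusters == 61 and total_data_clusters == 99
  # before deleting the lvol, the lvstore, the AIO bdev and the backing file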
00:14:30.312 10:22:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:14:30.312 10:22:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:30.312 10:22:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:30.312 10:22:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:30.312 10:22:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:30.312 aio_bdev 00:14:30.312 10:22:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 8e4a7e1c-8777-4814-8f8e-538be5cbfdc8 00:14:30.312 10:22:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=8e4a7e1c-8777-4814-8f8e-538be5cbfdc8 00:14:30.312 10:22:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:30.312 10:22:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:14:30.312 10:22:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:30.312 10:22:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:30.312 10:22:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:30.573 10:22:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8e4a7e1c-8777-4814-8f8e-538be5cbfdc8 -t 2000 00:14:30.573 [ 00:14:30.573 { 00:14:30.573 "name": "8e4a7e1c-8777-4814-8f8e-538be5cbfdc8", 00:14:30.573 "aliases": [ 00:14:30.573 "lvs/lvol" 00:14:30.573 ], 00:14:30.573 "product_name": "Logical Volume", 00:14:30.573 "block_size": 4096, 00:14:30.573 "num_blocks": 38912, 00:14:30.573 "uuid": "8e4a7e1c-8777-4814-8f8e-538be5cbfdc8", 00:14:30.573 "assigned_rate_limits": { 00:14:30.573 "rw_ios_per_sec": 0, 00:14:30.573 "rw_mbytes_per_sec": 0, 00:14:30.573 "r_mbytes_per_sec": 0, 00:14:30.573 "w_mbytes_per_sec": 0 00:14:30.573 }, 00:14:30.573 "claimed": false, 00:14:30.573 "zoned": false, 00:14:30.573 "supported_io_types": { 00:14:30.573 "read": true, 00:14:30.573 "write": true, 00:14:30.573 "unmap": true, 00:14:30.573 "flush": false, 00:14:30.573 "reset": true, 00:14:30.573 "nvme_admin": false, 00:14:30.573 "nvme_io": false, 00:14:30.573 "nvme_io_md": false, 00:14:30.573 "write_zeroes": true, 00:14:30.573 "zcopy": false, 00:14:30.573 "get_zone_info": false, 00:14:30.573 "zone_management": false, 00:14:30.573 "zone_append": false, 00:14:30.573 "compare": false, 00:14:30.573 "compare_and_write": false, 00:14:30.573 "abort": false, 00:14:30.574 "seek_hole": true, 00:14:30.574 "seek_data": true, 00:14:30.574 "copy": false, 00:14:30.574 "nvme_iov_md": false 00:14:30.574 }, 00:14:30.574 "driver_specific": { 00:14:30.574 "lvol": { 00:14:30.574 "lvol_store_uuid": "fa2da3d3-3123-4b88-ad74-180694dbc55d", 00:14:30.574 "base_bdev": "aio_bdev", 00:14:30.574 "thin_provision": false, 00:14:30.574 "num_allocated_clusters": 38, 00:14:30.574 "snapshot": false, 00:14:30.574 "clone": false, 00:14:30.574 "esnap_clone": false 00:14:30.574 } 00:14:30.574 } 00:14:30.574 } 
00:14:30.574 ] 00:14:30.574 10:22:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:14:30.574 10:22:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa2da3d3-3123-4b88-ad74-180694dbc55d 00:14:30.574 10:22:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:14:30.835 10:22:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:14:30.835 10:22:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fa2da3d3-3123-4b88-ad74-180694dbc55d 00:14:30.835 10:22:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:14:30.835 10:22:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:14:30.835 10:22:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8e4a7e1c-8777-4814-8f8e-538be5cbfdc8 00:14:31.096 10:22:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fa2da3d3-3123-4b88-ad74-180694dbc55d 00:14:31.356 10:22:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:31.356 10:22:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:31.617 00:14:31.617 real 0m15.167s 00:14:31.617 user 0m15.197s 00:14:31.617 sys 0m0.969s 00:14:31.617 10:22:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:31.617 10:22:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:14:31.617 ************************************ 00:14:31.617 END TEST lvs_grow_clean 00:14:31.617 ************************************ 00:14:31.617 10:22:08 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:14:31.617 10:22:08 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:31.617 10:22:08 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:31.617 10:22:08 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:31.617 10:22:08 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:31.617 ************************************ 00:14:31.617 START TEST lvs_grow_dirty 00:14:31.617 ************************************ 00:14:31.617 10:22:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:14:31.617 10:22:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:31.617 10:22:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:31.617 10:22:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:31.617 10:22:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local 
aio_init_size_mb=200 00:14:31.617 10:22:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:31.617 10:22:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:31.617 10:22:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:31.617 10:22:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:31.617 10:22:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:31.878 10:22:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:31.878 10:22:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:31.878 10:22:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=87f8627d-b8f1-4a1a-aa37-8c2a3127c685 00:14:31.878 10:22:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 87f8627d-b8f1-4a1a-aa37-8c2a3127c685 00:14:31.878 10:22:08 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:32.139 10:22:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:32.139 10:22:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:32.139 10:22:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 87f8627d-b8f1-4a1a-aa37-8c2a3127c685 lvol 150 00:14:32.139 10:22:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=0d3602a7-d40b-4859-b100-82096f61b141 00:14:32.139 10:22:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:32.139 10:22:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:32.401 [2024-07-15 10:22:09.397649] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:32.401 [2024-07-15 10:22:09.397703] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:32.401 true 00:14:32.401 10:22:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 87f8627d-b8f1-4a1a-aa37-8c2a3127c685 00:14:32.401 10:22:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:32.401 10:22:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 
)) 00:14:32.401 10:22:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:32.661 10:22:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0d3602a7-d40b-4859-b100-82096f61b141 00:14:32.661 10:22:09 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:14:32.923 [2024-07-15 10:22:09.991676] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:32.923 10:22:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:14:33.183 10:22:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2875364 00:14:33.183 10:22:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:33.183 10:22:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:33.183 10:22:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2875364 /var/tmp/bdevperf.sock 00:14:33.183 10:22:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 2875364 ']' 00:14:33.183 10:22:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:33.183 10:22:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:33.183 10:22:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:33.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:33.183 10:22:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:33.183 10:22:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:33.183 [2024-07-15 10:22:10.205887] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:14:33.183 [2024-07-15 10:22:10.205941] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2875364 ] 00:14:33.183 EAL: No free 2048 kB hugepages reported on node 1 00:14:33.183 [2024-07-15 10:22:10.287310] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:33.183 [2024-07-15 10:22:10.351637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:34.126 10:22:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:34.126 10:22:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:14:34.126 10:22:10 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:34.126 Nvme0n1 00:14:34.126 10:22:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:34.387 [ 00:14:34.387 { 00:14:34.387 "name": "Nvme0n1", 00:14:34.387 "aliases": [ 00:14:34.387 "0d3602a7-d40b-4859-b100-82096f61b141" 00:14:34.387 ], 00:14:34.387 "product_name": "NVMe disk", 00:14:34.387 "block_size": 4096, 00:14:34.387 "num_blocks": 38912, 00:14:34.387 "uuid": "0d3602a7-d40b-4859-b100-82096f61b141", 00:14:34.387 "assigned_rate_limits": { 00:14:34.387 "rw_ios_per_sec": 0, 00:14:34.387 "rw_mbytes_per_sec": 0, 00:14:34.387 "r_mbytes_per_sec": 0, 00:14:34.387 "w_mbytes_per_sec": 0 00:14:34.387 }, 00:14:34.387 "claimed": false, 00:14:34.387 "zoned": false, 00:14:34.387 "supported_io_types": { 00:14:34.387 "read": true, 00:14:34.387 "write": true, 00:14:34.387 "unmap": true, 00:14:34.387 "flush": true, 00:14:34.387 "reset": true, 00:14:34.387 "nvme_admin": true, 00:14:34.387 "nvme_io": true, 00:14:34.387 "nvme_io_md": false, 00:14:34.387 "write_zeroes": true, 00:14:34.387 "zcopy": false, 00:14:34.387 "get_zone_info": false, 00:14:34.387 "zone_management": false, 00:14:34.387 "zone_append": false, 00:14:34.387 "compare": true, 00:14:34.387 "compare_and_write": true, 00:14:34.387 "abort": true, 00:14:34.387 "seek_hole": false, 00:14:34.387 "seek_data": false, 00:14:34.387 "copy": true, 00:14:34.387 "nvme_iov_md": false 00:14:34.387 }, 00:14:34.387 "memory_domains": [ 00:14:34.387 { 00:14:34.387 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:14:34.387 "dma_device_type": 0 00:14:34.387 } 00:14:34.387 ], 00:14:34.387 "driver_specific": { 00:14:34.387 "nvme": [ 00:14:34.387 { 00:14:34.387 "trid": { 00:14:34.387 "trtype": "RDMA", 00:14:34.387 "adrfam": "IPv4", 00:14:34.387 "traddr": "192.168.100.8", 00:14:34.387 "trsvcid": "4420", 00:14:34.387 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:34.387 }, 00:14:34.387 "ctrlr_data": { 00:14:34.387 "cntlid": 1, 00:14:34.387 "vendor_id": "0x8086", 00:14:34.387 "model_number": "SPDK bdev Controller", 00:14:34.387 "serial_number": "SPDK0", 00:14:34.387 "firmware_revision": "24.09", 00:14:34.387 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:34.387 "oacs": { 00:14:34.387 "security": 0, 00:14:34.387 "format": 0, 00:14:34.387 "firmware": 0, 00:14:34.387 "ns_manage": 0 00:14:34.387 }, 00:14:34.387 "multi_ctrlr": true, 00:14:34.387 "ana_reporting": false 
00:14:34.387 }, 00:14:34.387 "vs": { 00:14:34.388 "nvme_version": "1.3" 00:14:34.388 }, 00:14:34.388 "ns_data": { 00:14:34.388 "id": 1, 00:14:34.388 "can_share": true 00:14:34.388 } 00:14:34.388 } 00:14:34.388 ], 00:14:34.388 "mp_policy": "active_passive" 00:14:34.388 } 00:14:34.388 } 00:14:34.388 ] 00:14:34.388 10:22:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2875508 00:14:34.388 10:22:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:34.388 10:22:11 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:34.388 Running I/O for 10 seconds... 00:14:35.353 Latency(us) 00:14:35.353 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:35.353 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:35.353 Nvme0n1 : 1.00 25987.00 101.51 0.00 0.00 0.00 0.00 0.00 00:14:35.353 =================================================================================================================== 00:14:35.353 Total : 25987.00 101.51 0.00 0.00 0.00 0.00 0.00 00:14:35.353 00:14:36.341 10:22:13 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 87f8627d-b8f1-4a1a-aa37-8c2a3127c685 00:14:36.341 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:36.341 Nvme0n1 : 2.00 26193.00 102.32 0.00 0.00 0.00 0.00 0.00 00:14:36.341 =================================================================================================================== 00:14:36.341 Total : 26193.00 102.32 0.00 0.00 0.00 0.00 0.00 00:14:36.341 00:14:36.602 true 00:14:36.602 10:22:13 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:36.602 10:22:13 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 87f8627d-b8f1-4a1a-aa37-8c2a3127c685 00:14:36.602 10:22:13 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:36.602 10:22:13 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:36.602 10:22:13 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2875508 00:14:37.545 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:37.545 Nvme0n1 : 3.00 26272.00 102.62 0.00 0.00 0.00 0.00 0.00 00:14:37.545 =================================================================================================================== 00:14:37.545 Total : 26272.00 102.62 0.00 0.00 0.00 0.00 0.00 00:14:37.545 00:14:38.489 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:38.489 Nvme0n1 : 4.00 26336.25 102.88 0.00 0.00 0.00 0.00 0.00 00:14:38.489 =================================================================================================================== 00:14:38.489 Total : 26336.25 102.88 0.00 0.00 0.00 0.00 0.00 00:14:38.489 00:14:39.433 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:39.433 Nvme0n1 : 5.00 26380.60 103.05 0.00 0.00 0.00 0.00 0.00 00:14:39.433 
=================================================================================================================== 00:14:39.433 Total : 26380.60 103.05 0.00 0.00 0.00 0.00 0.00 00:14:39.433 00:14:40.374 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:40.374 Nvme0n1 : 6.00 26410.33 103.17 0.00 0.00 0.00 0.00 0.00 00:14:40.374 =================================================================================================================== 00:14:40.374 Total : 26410.33 103.17 0.00 0.00 0.00 0.00 0.00 00:14:40.374 00:14:41.316 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:41.316 Nvme0n1 : 7.00 26436.43 103.27 0.00 0.00 0.00 0.00 0.00 00:14:41.316 =================================================================================================================== 00:14:41.316 Total : 26436.43 103.27 0.00 0.00 0.00 0.00 0.00 00:14:41.316 00:14:42.723 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:42.723 Nvme0n1 : 8.00 26455.88 103.34 0.00 0.00 0.00 0.00 0.00 00:14:42.723 =================================================================================================================== 00:14:42.723 Total : 26455.88 103.34 0.00 0.00 0.00 0.00 0.00 00:14:42.723 00:14:43.293 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:43.293 Nvme0n1 : 9.00 26467.56 103.39 0.00 0.00 0.00 0.00 0.00 00:14:43.293 =================================================================================================================== 00:14:43.293 Total : 26467.56 103.39 0.00 0.00 0.00 0.00 0.00 00:14:43.293 00:14:44.685 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:44.685 Nvme0n1 : 10.00 26479.70 103.44 0.00 0.00 0.00 0.00 0.00 00:14:44.685 =================================================================================================================== 00:14:44.685 Total : 26479.70 103.44 0.00 0.00 0.00 0.00 0.00 00:14:44.685 00:14:44.685 00:14:44.685 Latency(us) 00:14:44.685 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:44.685 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:44.685 Nvme0n1 : 10.00 26480.46 103.44 0.00 0.00 4830.06 3631.79 11796.48 00:14:44.685 =================================================================================================================== 00:14:44.685 Total : 26480.46 103.44 0.00 0.00 4830.06 3631.79 11796.48 00:14:44.685 0 00:14:44.685 10:22:21 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2875364 00:14:44.685 10:22:21 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 2875364 ']' 00:14:44.685 10:22:21 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 2875364 00:14:44.685 10:22:21 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:14:44.685 10:22:21 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:44.685 10:22:21 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2875364 00:14:44.685 10:22:21 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:44.685 10:22:21 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:44.685 10:22:21 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 2875364' 00:14:44.685 killing process with pid 2875364 00:14:44.685 10:22:21 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 2875364 00:14:44.685 Received shutdown signal, test time was about 10.000000 seconds 00:14:44.685 00:14:44.685 Latency(us) 00:14:44.685 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:44.685 =================================================================================================================== 00:14:44.685 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:44.685 10:22:21 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 2875364 00:14:44.685 10:22:21 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:14:44.685 10:22:21 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:44.945 10:22:22 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 87f8627d-b8f1-4a1a-aa37-8c2a3127c685 00:14:44.945 10:22:22 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:14:45.205 10:22:22 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:14:45.205 10:22:22 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:14:45.205 10:22:22 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2871702 00:14:45.205 10:22:22 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2871702 00:14:45.205 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2871702 Killed "${NVMF_APP[@]}" "$@" 00:14:45.205 10:22:22 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:14:45.205 10:22:22 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:14:45.205 10:22:22 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:45.205 10:22:22 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:45.205 10:22:22 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:45.205 10:22:22 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=2877806 00:14:45.205 10:22:22 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 2877806 00:14:45.205 10:22:22 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:45.205 10:22:22 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 2877806 ']' 00:14:45.205 10:22:22 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:45.205 10:22:22 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:45.205 10:22:22 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:45.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:45.205 10:22:22 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:45.205 10:22:22 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:45.206 [2024-07-15 10:22:22.337551] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:14:45.206 [2024-07-15 10:22:22.337628] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:45.206 EAL: No free 2048 kB hugepages reported on node 1 00:14:45.465 [2024-07-15 10:22:22.404921] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:45.465 [2024-07-15 10:22:22.469744] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:45.465 [2024-07-15 10:22:22.469781] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:45.465 [2024-07-15 10:22:22.469789] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:45.465 [2024-07-15 10:22:22.469795] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:45.465 [2024-07-15 10:22:22.469801] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:45.465 [2024-07-15 10:22:22.469817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:46.036 10:22:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:46.036 10:22:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:14:46.036 10:22:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:46.036 10:22:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:46.036 10:22:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:46.036 10:22:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:46.036 10:22:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:46.295 [2024-07-15 10:22:23.266628] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:14:46.295 [2024-07-15 10:22:23.266720] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:14:46.295 [2024-07-15 10:22:23.266748] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:14:46.295 10:22:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:14:46.295 10:22:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 0d3602a7-d40b-4859-b100-82096f61b141 00:14:46.295 10:22:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=0d3602a7-d40b-4859-b100-82096f61b141 00:14:46.295 10:22:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local 
bdev_timeout= 00:14:46.295 10:22:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:14:46.295 10:22:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:46.295 10:22:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:46.295 10:22:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:46.295 10:22:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0d3602a7-d40b-4859-b100-82096f61b141 -t 2000 00:14:46.555 [ 00:14:46.556 { 00:14:46.556 "name": "0d3602a7-d40b-4859-b100-82096f61b141", 00:14:46.556 "aliases": [ 00:14:46.556 "lvs/lvol" 00:14:46.556 ], 00:14:46.556 "product_name": "Logical Volume", 00:14:46.556 "block_size": 4096, 00:14:46.556 "num_blocks": 38912, 00:14:46.556 "uuid": "0d3602a7-d40b-4859-b100-82096f61b141", 00:14:46.556 "assigned_rate_limits": { 00:14:46.556 "rw_ios_per_sec": 0, 00:14:46.556 "rw_mbytes_per_sec": 0, 00:14:46.556 "r_mbytes_per_sec": 0, 00:14:46.556 "w_mbytes_per_sec": 0 00:14:46.556 }, 00:14:46.556 "claimed": false, 00:14:46.556 "zoned": false, 00:14:46.556 "supported_io_types": { 00:14:46.556 "read": true, 00:14:46.556 "write": true, 00:14:46.556 "unmap": true, 00:14:46.556 "flush": false, 00:14:46.556 "reset": true, 00:14:46.556 "nvme_admin": false, 00:14:46.556 "nvme_io": false, 00:14:46.556 "nvme_io_md": false, 00:14:46.556 "write_zeroes": true, 00:14:46.556 "zcopy": false, 00:14:46.556 "get_zone_info": false, 00:14:46.556 "zone_management": false, 00:14:46.556 "zone_append": false, 00:14:46.556 "compare": false, 00:14:46.556 "compare_and_write": false, 00:14:46.556 "abort": false, 00:14:46.556 "seek_hole": true, 00:14:46.556 "seek_data": true, 00:14:46.556 "copy": false, 00:14:46.556 "nvme_iov_md": false 00:14:46.556 }, 00:14:46.556 "driver_specific": { 00:14:46.556 "lvol": { 00:14:46.556 "lvol_store_uuid": "87f8627d-b8f1-4a1a-aa37-8c2a3127c685", 00:14:46.556 "base_bdev": "aio_bdev", 00:14:46.556 "thin_provision": false, 00:14:46.556 "num_allocated_clusters": 38, 00:14:46.556 "snapshot": false, 00:14:46.556 "clone": false, 00:14:46.556 "esnap_clone": false 00:14:46.556 } 00:14:46.556 } 00:14:46.556 } 00:14:46.556 ] 00:14:46.556 10:22:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:14:46.556 10:22:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 87f8627d-b8f1-4a1a-aa37-8c2a3127c685 00:14:46.556 10:22:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:14:46.556 10:22:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:14:46.556 10:22:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 87f8627d-b8f1-4a1a-aa37-8c2a3127c685 00:14:46.556 10:22:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:14:46.815 10:22:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:14:46.815 10:22:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:46.815 [2024-07-15 10:22:24.006483] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:47.075 10:22:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 87f8627d-b8f1-4a1a-aa37-8c2a3127c685 00:14:47.075 10:22:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:14:47.075 10:22:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 87f8627d-b8f1-4a1a-aa37-8c2a3127c685 00:14:47.075 10:22:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:47.075 10:22:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:47.075 10:22:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:47.075 10:22:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:47.075 10:22:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:47.075 10:22:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:47.075 10:22:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:47.075 10:22:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:14:47.075 10:22:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 87f8627d-b8f1-4a1a-aa37-8c2a3127c685 00:14:47.075 request: 00:14:47.075 { 00:14:47.075 "uuid": "87f8627d-b8f1-4a1a-aa37-8c2a3127c685", 00:14:47.075 "method": "bdev_lvol_get_lvstores", 00:14:47.075 "req_id": 1 00:14:47.075 } 00:14:47.075 Got JSON-RPC error response 00:14:47.075 response: 00:14:47.075 { 00:14:47.075 "code": -19, 00:14:47.075 "message": "No such device" 00:14:47.075 } 00:14:47.075 10:22:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:14:47.075 10:22:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:47.075 10:22:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:47.075 10:22:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:47.075 10:22:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:47.335 aio_bdev 00:14:47.336 10:22:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 0d3602a7-d40b-4859-b100-82096f61b141 00:14:47.336 10:22:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty 
-- common/autotest_common.sh@897 -- # local bdev_name=0d3602a7-d40b-4859-b100-82096f61b141 00:14:47.336 10:22:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:47.336 10:22:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:14:47.336 10:22:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:47.336 10:22:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:47.336 10:22:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:47.597 10:22:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0d3602a7-d40b-4859-b100-82096f61b141 -t 2000 00:14:47.597 [ 00:14:47.597 { 00:14:47.597 "name": "0d3602a7-d40b-4859-b100-82096f61b141", 00:14:47.597 "aliases": [ 00:14:47.597 "lvs/lvol" 00:14:47.597 ], 00:14:47.597 "product_name": "Logical Volume", 00:14:47.597 "block_size": 4096, 00:14:47.597 "num_blocks": 38912, 00:14:47.597 "uuid": "0d3602a7-d40b-4859-b100-82096f61b141", 00:14:47.597 "assigned_rate_limits": { 00:14:47.597 "rw_ios_per_sec": 0, 00:14:47.597 "rw_mbytes_per_sec": 0, 00:14:47.597 "r_mbytes_per_sec": 0, 00:14:47.597 "w_mbytes_per_sec": 0 00:14:47.597 }, 00:14:47.597 "claimed": false, 00:14:47.597 "zoned": false, 00:14:47.597 "supported_io_types": { 00:14:47.597 "read": true, 00:14:47.597 "write": true, 00:14:47.597 "unmap": true, 00:14:47.597 "flush": false, 00:14:47.597 "reset": true, 00:14:47.597 "nvme_admin": false, 00:14:47.597 "nvme_io": false, 00:14:47.597 "nvme_io_md": false, 00:14:47.597 "write_zeroes": true, 00:14:47.597 "zcopy": false, 00:14:47.597 "get_zone_info": false, 00:14:47.597 "zone_management": false, 00:14:47.597 "zone_append": false, 00:14:47.597 "compare": false, 00:14:47.597 "compare_and_write": false, 00:14:47.597 "abort": false, 00:14:47.597 "seek_hole": true, 00:14:47.597 "seek_data": true, 00:14:47.597 "copy": false, 00:14:47.597 "nvme_iov_md": false 00:14:47.597 }, 00:14:47.597 "driver_specific": { 00:14:47.597 "lvol": { 00:14:47.597 "lvol_store_uuid": "87f8627d-b8f1-4a1a-aa37-8c2a3127c685", 00:14:47.597 "base_bdev": "aio_bdev", 00:14:47.597 "thin_provision": false, 00:14:47.597 "num_allocated_clusters": 38, 00:14:47.597 "snapshot": false, 00:14:47.597 "clone": false, 00:14:47.597 "esnap_clone": false 00:14:47.597 } 00:14:47.597 } 00:14:47.597 } 00:14:47.597 ] 00:14:47.597 10:22:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:14:47.597 10:22:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 87f8627d-b8f1-4a1a-aa37-8c2a3127c685 00:14:47.597 10:22:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:14:47.858 10:22:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:14:47.858 10:22:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 87f8627d-b8f1-4a1a-aa37-8c2a3127c685 00:14:47.858 10:22:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r 
'.[0].total_data_clusters' 00:14:47.858 10:22:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:14:47.858 10:22:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0d3602a7-d40b-4859-b100-82096f61b141 00:14:48.118 10:22:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 87f8627d-b8f1-4a1a-aa37-8c2a3127c685 00:14:48.379 10:22:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:48.379 10:22:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:48.379 00:14:48.379 real 0m16.883s 00:14:48.379 user 0m44.664s 00:14:48.379 sys 0m2.380s 00:14:48.379 10:22:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:48.379 10:22:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:48.379 ************************************ 00:14:48.379 END TEST lvs_grow_dirty 00:14:48.379 ************************************ 00:14:48.379 10:22:25 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:14:48.379 10:22:25 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:14:48.379 10:22:25 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:14:48.379 10:22:25 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:14:48.379 10:22:25 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:14:48.379 10:22:25 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:48.379 10:22:25 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:14:48.379 10:22:25 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:14:48.379 10:22:25 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:14:48.379 10:22:25 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:48.641 nvmf_trace.0 00:14:48.641 10:22:25 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:14:48.641 10:22:25 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:14:48.641 10:22:25 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:48.641 10:22:25 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:14:48.641 10:22:25 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:14:48.641 10:22:25 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:14:48.641 10:22:25 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:14:48.641 10:22:25 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:48.641 10:22:25 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:14:48.641 rmmod nvme_rdma 00:14:48.641 rmmod nvme_fabrics 00:14:48.641 10:22:25 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:48.641 10:22:25 nvmf_rdma.nvmf_lvs_grow -- 
nvmf/common.sh@124 -- # set -e 00:14:48.641 10:22:25 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:14:48.641 10:22:25 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 2877806 ']' 00:14:48.641 10:22:25 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 2877806 00:14:48.641 10:22:25 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 2877806 ']' 00:14:48.641 10:22:25 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 2877806 00:14:48.641 10:22:25 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:14:48.641 10:22:25 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:48.641 10:22:25 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2877806 00:14:48.641 10:22:25 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:48.641 10:22:25 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:48.641 10:22:25 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2877806' 00:14:48.641 killing process with pid 2877806 00:14:48.641 10:22:25 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 2877806 00:14:48.641 10:22:25 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 2877806 00:14:48.903 10:22:25 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:48.903 10:22:25 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:14:48.903 00:14:48.903 real 0m41.787s 00:14:48.903 user 1m6.023s 00:14:48.903 sys 0m9.801s 00:14:48.903 10:22:25 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:48.903 10:22:25 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:48.903 ************************************ 00:14:48.903 END TEST nvmf_lvs_grow 00:14:48.903 ************************************ 00:14:48.903 10:22:25 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:14:48.903 10:22:25 nvmf_rdma -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:14:48.903 10:22:25 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:48.903 10:22:25 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:48.903 10:22:25 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:14:48.903 ************************************ 00:14:48.903 START TEST nvmf_bdev_io_wait 00:14:48.903 ************************************ 00:14:48.903 10:22:25 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:14:48.903 * Looking for test storage... 
00:14:48.903 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:48.903 10:22:26 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:48.903 10:22:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:14:48.903 10:22:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:48.903 10:22:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:48.903 10:22:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:48.903 10:22:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:48.903 10:22:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:48.903 10:22:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:48.903 10:22:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:48.903 10:22:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:48.903 10:22:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:48.903 10:22:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:48.903 10:22:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:48.903 10:22:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:48.903 10:22:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:48.903 10:22:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:48.903 10:22:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:48.903 10:22:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:48.903 10:22:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:48.903 10:22:26 nvmf_rdma.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:48.903 10:22:26 nvmf_rdma.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:48.903 10:22:26 nvmf_rdma.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:48.903 10:22:26 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.903 10:22:26 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.903 10:22:26 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.903 10:22:26 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:14:48.903 10:22:26 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.903 10:22:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:14:48.903 10:22:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:48.903 10:22:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:48.903 10:22:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:48.903 10:22:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:48.903 10:22:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:48.903 10:22:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:48.903 10:22:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:48.903 10:22:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:48.903 10:22:26 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:48.903 10:22:26 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:48.903 10:22:26 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:14:48.903 10:22:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:14:48.903 10:22:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:48.903 10:22:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:48.903 10:22:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:48.903 10:22:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:48.903 10:22:26 
nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:48.903 10:22:26 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:48.903 10:22:26 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:48.904 10:22:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:48.904 10:22:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:48.904 10:22:26 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:14:48.904 10:22:26 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:57.050 
10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:14:57.050 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:14:57.050 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:14:57.050 Found net devices under 0000:98:00.0: mlx_0_0 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- 
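[editor's note] Once a PCI function passes the ID filter, its netdev name is read straight out of sysfs, which is where the mlx_0_0 / mlx_0_1 names printed above come from. A rough equivalent with a hypothetical helper name:

  # Map a PCI function to the netdev(s) the kernel created for it.
  pci_to_netdev() {
    local pci=$1 path
    for path in "/sys/bus/pci/devices/$pci/net/"*; do
      [[ -e $path ]] && basename "$path"
    done
  }
  pci_to_netdev 0000:98:00.0   # prints mlx_0_0 on this testbed
  pci_to_netdev 0000:98:00.1   # prints mlx_0_1
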
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:14:57.050 Found net devices under 0000:98:00.1: mlx_0_1 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # rdma_device_init 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # uname 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # modprobe ib_cm 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@63 -- # modprobe ib_core 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@64 -- # modprobe ib_umad 00:14:57.050 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@66 -- # modprobe iw_cm 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # allocate_nic_ips 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@73 -- # get_rdma_if_list 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:14:57.051 10:22:33 
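[editor's note] rdma_device_init amounts to loading the kernel RDMA stack before any interface or IP work happens. A minimal sketch, assuming the modules are available for the running kernel:

  # Load the RDMA core, CM and user-space verbs modules; mlx5_core/mlx5_ib for the
  # ConnectX adapters found above are normally autoloaded already.
  for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    modprobe "$mod"
  done
  lsmod | grep -E '^(ib_|iw_|rdma_|mlx5_)'   # quick sanity check
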
nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:14:57.051 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:57.051 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:14:57.051 altname enp152s0f0np0 00:14:57.051 altname ens817f0np0 00:14:57.051 inet 192.168.100.8/24 scope global mlx_0_0 00:14:57.051 valid_lft forever preferred_lft forever 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:14:57.051 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:57.051 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:14:57.051 altname enp152s0f1np1 00:14:57.051 altname ens817f1np1 00:14:57.051 inet 192.168.100.9/24 scope global mlx_0_1 00:14:57.051 valid_lft forever preferred_lft forever 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:14:57.051 10:22:33 
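[editor's note] get_ip_address is the ip/awk/cut pipeline traced above: field 4 of `ip -o -4 addr show` is the CIDR address, and cut drops the prefix length so 192.168.100.8/24 becomes 192.168.100.8. The same helper in isolation:

  get_ip_address() {
    local interface=$1
    # "-o" prints one record per line; field 4 is e.g. 192.168.100.8/24
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  get_ip_address mlx_0_0   # 192.168.100.8
  get_ip_address mlx_0_1   # 192.168.100.9
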
nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@86 -- # get_rdma_if_list 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:14:57.051 192.168.100.9' 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # head -n 1 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:14:57.051 192.168.100.9' 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait 
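[editor's note] The addresses gathered above end up in a newline-separated RDMA_IP_LIST, and its first and second entries become the target IPs. A sketch of that selection, reusing the get_ip_address helper from the previous snippet:

  RDMA_IP_LIST=$(for ifc in mlx_0_0 mlx_0_1; do get_ip_address "$ifc"; done)
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
  echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"   # 192.168.100.8 192.168.100.9
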
-- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:14:57.051 192.168.100.9' 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # tail -n +2 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # head -n 1 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=2882609 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 2882609 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 2882609 ']' 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:57.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:57.051 10:22:33 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:57.051 [2024-07-15 10:22:34.025822] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:14:57.051 [2024-07-15 10:22:34.025874] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:57.051 EAL: No free 2048 kB hugepages reported on node 1 00:14:57.051 [2024-07-15 10:22:34.093204] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:57.051 [2024-07-15 10:22:34.160374] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:57.051 [2024-07-15 10:22:34.160411] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
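[editor's note] nvmfappstart launches nvmf_tgt with --wait-for-rpc and blocks until the RPC socket answers before recording nvmfpid. A minimal sketch of that start-and-wait step; the polling loop is my own simplification of waitforlisten:

  SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk   # path taken from this run
  "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
  nvmfpid=$!
  # Poll the default RPC socket until the app answers; rpc_get_methods is a cheap query.
  until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
  done
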
00:14:57.051 [2024-07-15 10:22:34.160419] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:57.051 [2024-07-15 10:22:34.160428] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:57.051 [2024-07-15 10:22:34.160433] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:57.051 [2024-07-15 10:22:34.160568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:57.051 [2024-07-15 10:22:34.160693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:57.051 [2024-07-15 10:22:34.160848] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.051 [2024-07-15 10:22:34.160849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:57.624 10:22:34 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:57.624 10:22:34 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:14:57.624 10:22:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:57.624 10:22:34 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:57.624 10:22:34 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:57.885 10:22:34 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:57.885 10:22:34 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:14:57.885 10:22:34 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.885 10:22:34 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:57.885 10:22:34 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.885 10:22:34 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:14:57.885 10:22:34 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.885 10:22:34 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:57.885 10:22:34 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.885 10:22:34 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:14:57.885 10:22:34 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.885 10:22:34 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:57.885 [2024-07-15 10:22:34.935836] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xadb3a0/0xadf890) succeed. 00:14:57.885 [2024-07-15 10:22:34.950131] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xadc9e0/0xb20f20) succeed. 
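[editor's note] Because the target was started with --wait-for-rpc, bdev options can be set before the framework initializes; only then is the RDMA transport created. The same three RPCs issued through rpc.py (the rpc wrapper name is an assumption):

  rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }
  rpc bdev_set_options -p 5 -c 1          # deliberately tiny bdev_io pool/cache to exercise IO waits
  rpc framework_start_init                # finish the init deferred by --wait-for-rpc
  rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
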
00:14:57.885 10:22:35 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.147 10:22:35 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:58.147 10:22:35 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.147 10:22:35 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:58.147 Malloc0 00:14:58.147 10:22:35 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.147 10:22:35 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:58.147 10:22:35 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.147 10:22:35 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:58.147 10:22:35 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.147 10:22:35 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:58.147 10:22:35 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.147 10:22:35 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:58.147 10:22:35 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.147 10:22:35 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:58.147 10:22:35 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.147 10:22:35 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:58.147 [2024-07-15 10:22:35.136721] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:58.147 10:22:35 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.147 10:22:35 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2882951 00:14:58.147 10:22:35 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2882953 00:14:58.147 10:22:35 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:14:58.147 10:22:35 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:14:58.147 10:22:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:58.147 10:22:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:58.147 10:22:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:58.147 10:22:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:58.147 { 00:14:58.147 "params": { 00:14:58.147 "name": "Nvme$subsystem", 00:14:58.147 "trtype": "$TEST_TRANSPORT", 00:14:58.147 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:58.147 "adrfam": "ipv4", 00:14:58.147 "trsvcid": "$NVMF_PORT", 00:14:58.147 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:58.147 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:58.147 "hdgst": ${hdgst:-false}, 00:14:58.147 "ddgst": ${ddgst:-false} 00:14:58.147 }, 00:14:58.147 "method": "bdev_nvme_attach_controller" 00:14:58.147 } 00:14:58.147 EOF 00:14:58.147 
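[editor's note] The target-side provisioning above is four RPCs: create a RAM-backed bdev, create the subsystem, attach the namespace, and expose an RDMA listener on the first target IP. Reusing the rpc wrapper from the previous sketch:

  rpc bdev_malloc_create 64 512 -b Malloc0     # 64 MiB bdev with 512-byte blocks
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
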
)") 00:14:58.147 10:22:35 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2882955 00:14:58.147 10:22:35 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:14:58.147 10:22:35 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:14:58.147 10:22:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:58.147 10:22:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:58.147 10:22:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:58.147 10:22:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:58.147 { 00:14:58.147 "params": { 00:14:58.147 "name": "Nvme$subsystem", 00:14:58.147 "trtype": "$TEST_TRANSPORT", 00:14:58.147 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:58.147 "adrfam": "ipv4", 00:14:58.147 "trsvcid": "$NVMF_PORT", 00:14:58.147 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:58.147 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:58.147 "hdgst": ${hdgst:-false}, 00:14:58.147 "ddgst": ${ddgst:-false} 00:14:58.147 }, 00:14:58.147 "method": "bdev_nvme_attach_controller" 00:14:58.147 } 00:14:58.147 EOF 00:14:58.147 )") 00:14:58.147 10:22:35 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2882958 00:14:58.147 10:22:35 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:14:58.147 10:22:35 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:14:58.148 10:22:35 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:14:58.148 10:22:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:58.148 10:22:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:58.148 10:22:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:58.148 10:22:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:58.148 10:22:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:58.148 { 00:14:58.148 "params": { 00:14:58.148 "name": "Nvme$subsystem", 00:14:58.148 "trtype": "$TEST_TRANSPORT", 00:14:58.148 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:58.148 "adrfam": "ipv4", 00:14:58.148 "trsvcid": "$NVMF_PORT", 00:14:58.148 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:58.148 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:58.148 "hdgst": ${hdgst:-false}, 00:14:58.148 "ddgst": ${ddgst:-false} 00:14:58.148 }, 00:14:58.148 "method": "bdev_nvme_attach_controller" 00:14:58.148 } 00:14:58.148 EOF 00:14:58.148 )") 00:14:58.148 10:22:35 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:14:58.148 10:22:35 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:14:58.148 10:22:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:58.148 10:22:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:58.148 10:22:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:58.148 10:22:35 
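[editor's note] Each of the four bdevperf instances gets its own core mask, shm id (-i) and hugepage pool (-s 256 MB) and differs only in the workload passed to -w; the test then waits on all of the PIDs. A sketch of that fan-out, with /tmp/bdevperf_nvme.json standing in for the config the run actually feeds through /dev/fd/63:

  BDEVPERF="$SPDK/build/examples/bdevperf"
  CONF=/tmp/bdevperf_nvme.json   # placeholder file name
  # -q queue depth, -o IO size in bytes, -w workload, -t run time in seconds, -s DPDK memory in MB
  "$BDEVPERF" -m 0x10 -i 1 --json "$CONF" -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
  "$BDEVPERF" -m 0x20 -i 2 --json "$CONF" -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
  "$BDEVPERF" -m 0x40 -i 3 --json "$CONF" -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
  "$BDEVPERF" -m 0x80 -i 4 --json "$CONF" -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
  wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"
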
nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:58.148 10:22:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:58.148 { 00:14:58.148 "params": { 00:14:58.148 "name": "Nvme$subsystem", 00:14:58.148 "trtype": "$TEST_TRANSPORT", 00:14:58.148 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:58.148 "adrfam": "ipv4", 00:14:58.148 "trsvcid": "$NVMF_PORT", 00:14:58.148 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:58.148 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:58.148 "hdgst": ${hdgst:-false}, 00:14:58.148 "ddgst": ${ddgst:-false} 00:14:58.148 }, 00:14:58.148 "method": "bdev_nvme_attach_controller" 00:14:58.148 } 00:14:58.148 EOF 00:14:58.148 )") 00:14:58.148 10:22:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:58.148 10:22:35 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2882951 00:14:58.148 10:22:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:58.148 10:22:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:58.148 10:22:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:58.148 10:22:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:58.148 10:22:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:58.148 10:22:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:58.148 "params": { 00:14:58.148 "name": "Nvme1", 00:14:58.148 "trtype": "rdma", 00:14:58.148 "traddr": "192.168.100.8", 00:14:58.148 "adrfam": "ipv4", 00:14:58.148 "trsvcid": "4420", 00:14:58.148 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:58.148 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:58.148 "hdgst": false, 00:14:58.148 "ddgst": false 00:14:58.148 }, 00:14:58.148 "method": "bdev_nvme_attach_controller" 00:14:58.148 }' 00:14:58.148 10:22:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
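[editor's note] gen_nvmf_target_json wraps the params fragment printed above into a full SPDK JSON config that bdevperf can consume. A hand-written equivalent of that file is sketched below; the subsystems/bdev wrapper is my assumption based on SPDK's JSON config layout, while the params block is copied from this run:

  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme1",
              "trtype": "rdma",
              "traddr": "192.168.100.8",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }
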
00:14:58.148 10:22:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:58.148 10:22:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:58.148 "params": { 00:14:58.148 "name": "Nvme1", 00:14:58.148 "trtype": "rdma", 00:14:58.148 "traddr": "192.168.100.8", 00:14:58.148 "adrfam": "ipv4", 00:14:58.148 "trsvcid": "4420", 00:14:58.148 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:58.148 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:58.148 "hdgst": false, 00:14:58.148 "ddgst": false 00:14:58.148 }, 00:14:58.148 "method": "bdev_nvme_attach_controller" 00:14:58.148 }' 00:14:58.148 10:22:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:58.148 10:22:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:58.148 "params": { 00:14:58.148 "name": "Nvme1", 00:14:58.148 "trtype": "rdma", 00:14:58.148 "traddr": "192.168.100.8", 00:14:58.148 "adrfam": "ipv4", 00:14:58.148 "trsvcid": "4420", 00:14:58.148 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:58.148 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:58.148 "hdgst": false, 00:14:58.148 "ddgst": false 00:14:58.148 }, 00:14:58.148 "method": "bdev_nvme_attach_controller" 00:14:58.148 }' 00:14:58.148 10:22:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:58.148 10:22:35 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:58.148 "params": { 00:14:58.148 "name": "Nvme1", 00:14:58.148 "trtype": "rdma", 00:14:58.148 "traddr": "192.168.100.8", 00:14:58.148 "adrfam": "ipv4", 00:14:58.148 "trsvcid": "4420", 00:14:58.148 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:58.148 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:58.148 "hdgst": false, 00:14:58.148 "ddgst": false 00:14:58.148 }, 00:14:58.148 "method": "bdev_nvme_attach_controller" 00:14:58.148 }' 00:14:58.148 [2024-07-15 10:22:35.188271] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:14:58.148 [2024-07-15 10:22:35.188272] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:14:58.148 [2024-07-15 10:22:35.188326] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-15 10:22:35.188326] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:14:58.148 --proc-type=auto ] 00:14:58.148 [2024-07-15 10:22:35.188910] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:14:58.148 [2024-07-15 10:22:35.188955] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:14:58.148 [2024-07-15 10:22:35.192747] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:14:58.148 [2024-07-15 10:22:35.192795] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:14:58.148 EAL: No free 2048 kB hugepages reported on node 1 00:14:58.148 EAL: No free 2048 kB hugepages reported on node 1 00:14:58.410 [2024-07-15 10:22:35.346560] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:58.410 EAL: No free 2048 kB hugepages reported on node 1 00:14:58.410 [2024-07-15 10:22:35.397376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:14:58.410 [2024-07-15 10:22:35.406162] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:58.410 EAL: No free 2048 kB hugepages reported on node 1 00:14:58.410 [2024-07-15 10:22:35.456975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:14:58.410 [2024-07-15 10:22:35.466900] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:58.410 [2024-07-15 10:22:35.514097] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:58.410 [2024-07-15 10:22:35.517772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:14:58.410 [2024-07-15 10:22:35.563039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:14:58.672 Running I/O for 1 seconds... 00:14:58.672 Running I/O for 1 seconds... 00:14:58.672 Running I/O for 1 seconds... 00:14:58.672 Running I/O for 1 seconds... 00:14:59.616 00:14:59.616 Latency(us) 00:14:59.616 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:59.616 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:14:59.616 Nvme1n1 : 1.00 21790.93 85.12 0.00 0.00 5858.13 3932.16 15291.73 00:14:59.616 =================================================================================================================== 00:14:59.616 Total : 21790.93 85.12 0.00 0.00 5858.13 3932.16 15291.73 00:14:59.616 00:14:59.616 Latency(us) 00:14:59.616 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:59.616 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:14:59.616 Nvme1n1 : 1.00 17046.35 66.59 0.00 0.00 7485.51 4915.20 19442.35 00:14:59.616 =================================================================================================================== 00:14:59.616 Total : 17046.35 66.59 0.00 0.00 7485.51 4915.20 19442.35 00:14:59.616 00:14:59.616 Latency(us) 00:14:59.616 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:59.616 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:14:59.616 Nvme1n1 : 1.00 24314.79 94.98 0.00 0.00 5251.19 4041.39 16274.77 00:14:59.616 =================================================================================================================== 00:14:59.616 Total : 24314.79 94.98 0.00 0.00 5251.19 4041.39 16274.77 00:14:59.616 00:14:59.616 Latency(us) 00:14:59.616 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:59.616 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:14:59.616 Nvme1n1 : 1.00 188553.40 736.54 0.00 0.00 675.58 269.65 2443.95 00:14:59.616 =================================================================================================================== 00:14:59.616 Total : 188553.40 736.54 0.00 0.00 675.58 269.65 2443.95 00:14:59.877 10:22:36 nvmf_rdma.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@38 -- # wait 2882953 00:14:59.877 10:22:36 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2882955 00:14:59.877 10:22:36 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2882958 00:14:59.877 10:22:36 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:59.877 10:22:36 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.877 10:22:36 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:59.877 10:22:36 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.877 10:22:36 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:14:59.877 10:22:36 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:14:59.877 10:22:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:59.877 10:22:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:14:59.877 10:22:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:14:59.877 10:22:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:14:59.877 10:22:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:14:59.877 10:22:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:59.877 10:22:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:14:59.877 rmmod nvme_rdma 00:14:59.877 rmmod nvme_fabrics 00:14:59.877 10:22:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:59.878 10:22:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:14:59.878 10:22:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:14:59.878 10:22:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 2882609 ']' 00:14:59.878 10:22:36 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 2882609 00:14:59.878 10:22:36 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 2882609 ']' 00:14:59.878 10:22:36 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 2882609 00:14:59.878 10:22:36 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:14:59.878 10:22:36 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:59.878 10:22:36 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2882609 00:14:59.878 10:22:37 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:59.878 10:22:37 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:59.878 10:22:37 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2882609' 00:14:59.878 killing process with pid 2882609 00:14:59.878 10:22:37 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 2882609 00:14:59.878 10:22:37 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 2882609 00:15:00.138 10:22:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:00.138 10:22:37 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:15:00.138 00:15:00.138 real 0m11.285s 00:15:00.138 user 0m19.887s 00:15:00.138 sys 0m6.945s 00:15:00.138 10:22:37 nvmf_rdma.nvmf_bdev_io_wait -- 
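[editor's note] After the four bdevperf PIDs are reaped, the trap handler tears everything down: the subsystem is deleted, the target process is killed, and the initiator-side modules are removed, which is the rmmod output shown above. Roughly what killprocess plus nvmfcleanup amount to (a sketch, not the exact helpers):

  rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill "$nvmfpid" && wait "$nvmfpid" 2>/dev/null    # stop the nvmf_tgt started earlier
  modprobe -v -r nvme-rdma                          # unload initiator-side fabric modules
  modprobe -v -r nvme-fabrics
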
common/autotest_common.sh@1124 -- # xtrace_disable 00:15:00.138 10:22:37 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:00.138 ************************************ 00:15:00.138 END TEST nvmf_bdev_io_wait 00:15:00.138 ************************************ 00:15:00.138 10:22:37 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:15:00.138 10:22:37 nvmf_rdma -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:15:00.138 10:22:37 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:00.138 10:22:37 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:00.138 10:22:37 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:15:00.138 ************************************ 00:15:00.138 START TEST nvmf_queue_depth 00:15:00.138 ************************************ 00:15:00.138 10:22:37 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:15:00.400 * Looking for test storage... 00:15:00.400 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:00.400 10:22:37 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:00.400 10:22:37 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:15:00.400 10:22:37 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:00.400 10:22:37 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:00.400 10:22:37 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:00.400 10:22:37 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:00.400 10:22:37 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:00.400 10:22:37 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:00.400 10:22:37 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:00.400 10:22:37 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:00.400 10:22:37 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:00.400 10:22:37 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:00.400 10:22:37 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:00.400 10:22:37 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:00.400 10:22:37 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:00.400 10:22:37 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:00.400 10:22:37 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:00.400 10:22:37 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:00.400 10:22:37 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:00.400 10:22:37 nvmf_rdma.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:00.400 10:22:37 nvmf_rdma.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e 
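[editor's note] The queue-depth test that starts here re-sources nvmf/common.sh, so it regenerates the host identity before touching the fabric. A small sketch of that step; the exact derivation of NVME_HOSTID is my assumption, while the array layout matches the trace below:

  # A fresh host NQN per run; the UUID suffix doubles as the host ID.
  NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:00539ede-...
  NVME_HOSTID=${NVME_HOSTNQN##*:}         # keep only the text after the last ':'
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
  printf '%s\n' "${NVME_HOST[@]}"
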
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:00.400 10:22:37 nvmf_rdma.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:00.400 10:22:37 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.400 10:22:37 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.400 10:22:37 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.400 10:22:37 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:15:00.400 10:22:37 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.400 10:22:37 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:15:00.400 10:22:37 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:00.400 10:22:37 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:00.400 10:22:37 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:00.400 10:22:37 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:00.400 10:22:37 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:00.400 10:22:37 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:00.400 10:22:37 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:00.400 10:22:37 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:00.400 10:22:37 
nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:15:00.400 10:22:37 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:15:00.400 10:22:37 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:00.400 10:22:37 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:15:00.400 10:22:37 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:15:00.400 10:22:37 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:00.400 10:22:37 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:00.400 10:22:37 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:00.400 10:22:37 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:00.400 10:22:37 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:00.400 10:22:37 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:00.400 10:22:37 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:00.400 10:22:37 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:00.400 10:22:37 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:00.401 10:22:37 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:15:00.401 10:22:37 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:08.548 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:08.548 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:15:08.548 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:08.548 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:08.548 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:08.548 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:08.548 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:08.548 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:15:08.548 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:08.548 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:15:08.548 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:15:08.548 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:15:08.548 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:15:08.548 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:15:08.548 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:15:08.548 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:08.548 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:08.548 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:08.548 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:08.548 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:08.548 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:08.548 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:08.548 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:08.548 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:08.548 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:08.548 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:08.548 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:08.548 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:15:08.548 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:15:08.548 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:15:08.548 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:15:08.548 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:15:08.548 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:15:08.549 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:15:08.549 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:08.549 10:22:45 
nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:15:08.549 Found net devices under 0000:98:00.0: mlx_0_0 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:15:08.549 Found net devices under 0000:98:00.1: mlx_0_1 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@420 -- # rdma_device_init 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@58 -- # uname 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@62 -- # modprobe ib_cm 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@63 -- # modprobe ib_core 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@64 -- # modprobe ib_umad 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@66 -- # modprobe iw_cm 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@502 -- # allocate_nic_ips 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@73 -- # get_rdma_if_list 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- 
nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:15:08.549 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:08.549 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:15:08.549 altname enp152s0f0np0 00:15:08.549 altname ens817f0np0 00:15:08.549 inet 192.168.100.8/24 scope global mlx_0_0 00:15:08.549 valid_lft forever preferred_lft forever 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- 
nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:15:08.549 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:08.549 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:15:08.549 altname enp152s0f1np1 00:15:08.549 altname ens817f1np1 00:15:08.549 inet 192.168.100.9/24 scope global mlx_0_1 00:15:08.549 valid_lft forever preferred_lft forever 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@86 -- # get_rdma_if_list 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:15:08.549 10:22:45 
nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:08.549 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:15:08.549 192.168.100.9' 00:15:08.550 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:15:08.550 192.168.100.9' 00:15:08.550 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@457 -- # head -n 1 00:15:08.550 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:08.550 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:15:08.550 192.168.100.9' 00:15:08.550 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@458 -- # tail -n +2 00:15:08.550 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@458 -- # head -n 1 00:15:08.550 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:08.550 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:15:08.550 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:08.550 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:15:08.550 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:15:08.550 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:15:08.550 10:22:45 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:15:08.550 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:08.550 10:22:45 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:08.550 10:22:45 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:08.550 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=2887648 00:15:08.550 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 2887648 00:15:08.550 10:22:45 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:08.550 10:22:45 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 2887648 ']' 00:15:08.550 10:22:45 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:08.550 10:22:45 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:08.550 10:22:45 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:08.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:08.550 10:22:45 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:08.550 10:22:45 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:08.550 [2024-07-15 10:22:45.495379] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:15:08.550 [2024-07-15 10:22:45.495432] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:08.550 EAL: No free 2048 kB hugepages reported on node 1 00:15:08.550 [2024-07-15 10:22:45.580519] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.550 [2024-07-15 10:22:45.653943] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:08.550 [2024-07-15 10:22:45.653995] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:08.550 [2024-07-15 10:22:45.654003] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:08.550 [2024-07-15 10:22:45.654009] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:08.550 [2024-07-15 10:22:45.654015] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:08.550 [2024-07-15 10:22:45.654050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:09.120 10:22:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:09.120 10:22:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:15:09.120 10:22:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:09.120 10:22:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:09.120 10:22:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:09.430 10:22:46 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:09.430 10:22:46 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:15:09.430 10:22:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.430 10:22:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:09.430 [2024-07-15 10:22:46.356022] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x86b360/0x86f850) succeed. 00:15:09.430 [2024-07-15 10:22:46.369866] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x86c860/0x8b0ee0) succeed. 
00:15:09.430 10:22:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.430 10:22:46 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:09.430 10:22:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.430 10:22:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:09.430 Malloc0 00:15:09.430 10:22:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.430 10:22:46 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:09.430 10:22:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.430 10:22:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:09.430 10:22:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.430 10:22:46 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:09.430 10:22:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.430 10:22:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:09.430 10:22:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.430 10:22:46 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:09.430 10:22:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.430 10:22:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:09.430 [2024-07-15 10:22:46.473665] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:09.430 10:22:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.430 10:22:46 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2887746 00:15:09.430 10:22:46 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:09.430 10:22:46 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:15:09.430 10:22:46 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2887746 /var/tmp/bdevperf.sock 00:15:09.430 10:22:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 2887746 ']' 00:15:09.430 10:22:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:09.430 10:22:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:09.430 10:22:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:09.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:15:09.430 10:22:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:09.430 10:22:46 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:09.430 [2024-07-15 10:22:46.525463] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:15:09.430 [2024-07-15 10:22:46.525522] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2887746 ] 00:15:09.430 EAL: No free 2048 kB hugepages reported on node 1 00:15:09.430 [2024-07-15 10:22:46.595695] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.691 [2024-07-15 10:22:46.670094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.261 10:22:47 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:10.261 10:22:47 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:15:10.261 10:22:47 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:10.261 10:22:47 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.261 10:22:47 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:10.261 NVMe0n1 00:15:10.261 10:22:47 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.261 10:22:47 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:10.521 Running I/O for 10 seconds... 
00:15:20.526 00:15:20.526 Latency(us) 00:15:20.526 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:20.526 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:15:20.526 Verification LBA range: start 0x0 length 0x4000 00:15:20.526 NVMe0n1 : 10.02 15016.00 58.66 0.00 0.00 68008.61 21408.43 46749.01 00:15:20.526 =================================================================================================================== 00:15:20.526 Total : 15016.00 58.66 0.00 0.00 68008.61 21408.43 46749.01 00:15:20.526 0 00:15:20.526 10:22:57 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2887746 00:15:20.526 10:22:57 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 2887746 ']' 00:15:20.526 10:22:57 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 2887746 00:15:20.526 10:22:57 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:15:20.526 10:22:57 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:20.526 10:22:57 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2887746 00:15:20.526 10:22:57 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:20.526 10:22:57 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:20.526 10:22:57 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2887746' 00:15:20.526 killing process with pid 2887746 00:15:20.526 10:22:57 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 2887746 00:15:20.526 Received shutdown signal, test time was about 10.000000 seconds 00:15:20.526 00:15:20.526 Latency(us) 00:15:20.526 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:20.526 =================================================================================================================== 00:15:20.526 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:20.526 10:22:57 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 2887746 00:15:20.788 10:22:57 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:20.788 10:22:57 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:15:20.788 10:22:57 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:20.788 10:22:57 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:15:20.788 10:22:57 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:15:20.788 10:22:57 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:15:20.788 10:22:57 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:15:20.788 10:22:57 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:20.788 10:22:57 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:15:20.788 rmmod nvme_rdma 00:15:20.788 rmmod nvme_fabrics 00:15:20.788 10:22:57 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:20.788 10:22:57 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:15:20.788 10:22:57 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:15:20.788 10:22:57 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 2887648 ']' 00:15:20.788 10:22:57 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 2887648 
00:15:20.788 10:22:57 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 2887648 ']' 00:15:20.788 10:22:57 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 2887648 00:15:20.788 10:22:57 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:15:20.788 10:22:57 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:20.788 10:22:57 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2887648 00:15:20.788 10:22:57 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:20.788 10:22:57 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:20.788 10:22:57 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2887648' 00:15:20.788 killing process with pid 2887648 00:15:20.788 10:22:57 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 2887648 00:15:20.788 10:22:57 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 2887648 00:15:21.050 10:22:58 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:21.050 10:22:58 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:15:21.050 00:15:21.050 real 0m20.727s 00:15:21.050 user 0m26.356s 00:15:21.050 sys 0m6.548s 00:15:21.050 10:22:58 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:21.050 10:22:58 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:21.050 ************************************ 00:15:21.050 END TEST nvmf_queue_depth 00:15:21.050 ************************************ 00:15:21.050 10:22:58 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:15:21.050 10:22:58 nvmf_rdma -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:15:21.050 10:22:58 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:21.050 10:22:58 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:21.050 10:22:58 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:15:21.050 ************************************ 00:15:21.050 START TEST nvmf_target_multipath 00:15:21.050 ************************************ 00:15:21.050 10:22:58 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:15:21.050 * Looking for test storage... 
00:15:21.050 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:21.050 10:22:58 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:21.050 10:22:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:15:21.050 10:22:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:21.050 10:22:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:21.050 10:22:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:21.050 10:22:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:21.050 10:22:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:21.050 10:22:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:21.050 10:22:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:21.050 10:22:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:21.050 10:22:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:21.050 10:22:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:21.050 10:22:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:21.050 10:22:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:21.050 10:22:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:21.050 10:22:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:21.050 10:22:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:21.050 10:22:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:21.050 10:22:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:21.050 10:22:58 nvmf_rdma.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:21.050 10:22:58 nvmf_rdma.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:21.050 10:22:58 nvmf_rdma.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:21.050 10:22:58 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.050 10:22:58 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.050 10:22:58 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.050 10:22:58 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:15:21.050 10:22:58 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.050 10:22:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:15:21.050 10:22:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:21.050 10:22:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:21.050 10:22:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:21.050 10:22:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:21.050 10:22:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:21.050 10:22:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:21.050 10:22:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:21.050 10:22:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:21.050 10:22:58 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:21.050 10:22:58 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:21.051 10:22:58 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:21.051 10:22:58 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:15:21.051 10:22:58 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:15:21.051 10:22:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:15:21.051 10:22:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
00:15:21.051 10:22:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:21.051 10:22:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:21.051 10:22:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:21.051 10:22:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:21.051 10:22:58 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:21.051 10:22:58 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:21.051 10:22:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:21.051 10:22:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:21.051 10:22:58 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:15:21.051 10:22:58 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:15:29.314 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:29.314 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:15:29.314 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:29.314 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:29.314 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:29.314 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:29.314 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:29.314 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:15:29.314 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:29.314 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:15:29.314 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:15:29.314 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:15:29.314 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:15:29.314 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:15:29.314 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:15:29.314 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:29.314 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:29.314 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:29.314 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:29.314 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:29.314 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:29.314 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:29.314 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:29.314 10:23:06 
nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:29.314 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:29.314 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:29.314 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:29.314 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:15:29.314 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:15:29.314 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:15:29.314 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:15:29.314 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:15:29.314 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:29.314 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:29.314 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:15:29.314 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:15:29.314 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:29.314 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:29.314 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:29.314 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:29.314 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:29.314 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:29.314 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:29.314 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:15:29.314 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:15:29.314 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:29.314 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:29.314 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:29.314 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:29.314 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:29.314 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:29.314 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:29.314 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:15:29.314 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:29.314 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:29.314 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:29.314 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 
== 0 )) 00:15:29.314 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:29.314 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:15:29.314 Found net devices under 0000:98:00.0: mlx_0_0 00:15:29.314 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:29.314 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:29.314 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:29.314 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:29.314 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:29.314 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:29.314 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:15:29.314 Found net devices under 0000:98:00.1: mlx_0_1 00:15:29.314 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:29.314 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:29.314 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@420 -- # rdma_device_init 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@58 -- # uname 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@62 -- # modprobe ib_cm 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@63 -- # modprobe ib_core 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@64 -- # modprobe ib_umad 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@66 -- # modprobe iw_cm 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@502 -- # allocate_nic_ips 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@73 -- # get_rdma_if_list 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:15:29.315 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:29.315 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:15:29.315 altname enp152s0f0np0 00:15:29.315 altname ens817f0np0 00:15:29.315 inet 192.168.100.8/24 scope global mlx_0_0 00:15:29.315 valid_lft forever preferred_lft forever 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:15:29.315 10:23:06 
nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:15:29.315 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:29.315 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:15:29.315 altname enp152s0f1np1 00:15:29.315 altname ens817f1np1 00:15:29.315 inet 192.168.100.9/24 scope global mlx_0_1 00:15:29.315 valid_lft forever preferred_lft forever 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@86 -- # get_rdma_if_list 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- 
nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:15:29.315 192.168.100.9' 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:15:29.315 192.168.100.9' 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@457 -- # head -n 1 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:15:29.315 192.168.100.9' 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@458 -- # tail -n +2 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@458 -- # head -n 1 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 192.168.100.9 ']' 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' rdma '!=' tcp ']' 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@52 -- # echo 'run this test only with TCP transport for now' 00:15:29.315 run this test only with TCP transport for now 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@53 -- # nvmftestfini 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:15:29.315 rmmod nvme_rdma 00:15:29.315 rmmod nvme_fabrics 00:15:29.315 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:29.316 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:15:29.316 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:15:29.316 
10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:15:29.316 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:29.316 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:15:29.316 10:23:06 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@54 -- # exit 0 00:15:29.316 10:23:06 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:15:29.316 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:29.316 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:15:29.316 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:15:29.316 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:15:29.316 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:15:29.316 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:29.316 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:15:29.316 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:29.316 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:15:29.316 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:15:29.316 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:15:29.316 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:29.316 10:23:06 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:15:29.316 00:15:29.316 real 0m8.253s 00:15:29.316 user 0m2.329s 00:15:29.316 sys 0m6.021s 00:15:29.316 10:23:06 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:29.316 10:23:06 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:15:29.316 ************************************ 00:15:29.316 END TEST nvmf_target_multipath 00:15:29.316 ************************************ 00:15:29.316 10:23:06 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:15:29.316 10:23:06 nvmf_rdma -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:15:29.316 10:23:06 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:29.316 10:23:06 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:29.316 10:23:06 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:15:29.316 ************************************ 00:15:29.316 START TEST nvmf_zcopy 00:15:29.316 ************************************ 00:15:29.316 10:23:06 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:15:29.576 * Looking for test storage... 
00:15:29.576 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:29.576 10:23:06 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:29.576 10:23:06 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:15:29.576 10:23:06 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:29.576 10:23:06 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:29.576 10:23:06 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:29.576 10:23:06 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:29.576 10:23:06 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:29.576 10:23:06 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:29.576 10:23:06 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:29.576 10:23:06 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:29.576 10:23:06 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:29.576 10:23:06 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:29.576 10:23:06 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:29.576 10:23:06 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:29.576 10:23:06 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:29.576 10:23:06 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:29.576 10:23:06 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:29.576 10:23:06 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:29.576 10:23:06 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:29.576 10:23:06 nvmf_rdma.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:29.576 10:23:06 nvmf_rdma.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:29.576 10:23:06 nvmf_rdma.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:29.576 10:23:06 nvmf_rdma.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.576 10:23:06 nvmf_rdma.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.576 10:23:06 nvmf_rdma.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.576 10:23:06 nvmf_rdma.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:15:29.576 10:23:06 nvmf_rdma.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.576 10:23:06 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:15:29.576 10:23:06 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:29.576 10:23:06 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:29.576 10:23:06 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:29.576 10:23:06 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:29.576 10:23:06 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:29.576 10:23:06 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:29.576 10:23:06 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:29.576 10:23:06 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:29.576 10:23:06 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:15:29.577 10:23:06 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:15:29.577 10:23:06 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:29.577 10:23:06 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:29.577 10:23:06 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:29.577 10:23:06 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:29.577 10:23:06 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:29.577 10:23:06 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:29.577 10:23:06 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:29.577 10:23:06 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@414 -- # 
[[ phy != virt ]] 00:15:29.577 10:23:06 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:29.577 10:23:06 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:15:29.577 10:23:06 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:37.723 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:37.723 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:15:37.723 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:37.723 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:37.723 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:37.723 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:37.723 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:37.723 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:15:37.723 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:37.723 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:15:37.723 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:15:37.723 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:15:37.723 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:15:37.723 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:15:37.723 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:15:37.723 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:37.723 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:37.723 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:37.723 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:37.723 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:37.723 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:37.723 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:37.723 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:37.723 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:37.723 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:37.723 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:37.723 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:37.723 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:15:37.723 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:15:37.723 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:15:37.723 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:15:37.723 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:15:37.723 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci 
in "${pci_devs[@]}" 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:15:37.724 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:15:37.724 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:15:37.724 Found net devices under 0000:98:00.0: mlx_0_0 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:15:37.724 Found net devices under 0000:98:00.1: mlx_0_1 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:15:37.724 10:23:14 
nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@420 -- # rdma_device_init 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@58 -- # uname 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@62 -- # modprobe ib_cm 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@63 -- # modprobe ib_core 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@64 -- # modprobe ib_umad 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@66 -- # modprobe iw_cm 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@502 -- # allocate_nic_ips 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@73 -- # get_rdma_if_list 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:37.724 10:23:14 
nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:15:37.724 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:37.724 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:15:37.724 altname enp152s0f0np0 00:15:37.724 altname ens817f0np0 00:15:37.724 inet 192.168.100.8/24 scope global mlx_0_0 00:15:37.724 valid_lft forever preferred_lft forever 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:15:37.724 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:37.724 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:15:37.724 altname enp152s0f1np1 00:15:37.724 altname ens817f1np1 00:15:37.724 inet 192.168.100.9/24 scope global mlx_0_1 00:15:37.724 valid_lft forever preferred_lft forever 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@86 -- # get_rdma_if_list 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- 
nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:15:37.724 192.168.100.9' 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:15:37.724 192.168.100.9' 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@457 -- # head -n 1 00:15:37.724 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:37.725 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:15:37.725 192.168.100.9' 00:15:37.725 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@458 -- # tail -n +2 00:15:37.725 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@458 -- # head -n 1 00:15:37.725 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:37.725 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:15:37.725 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:37.725 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:15:37.725 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:15:37.725 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:15:37.725 10:23:14 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:15:37.725 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:37.725 10:23:14 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:37.725 10:23:14 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:37.725 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=2898540 00:15:37.725 10:23:14 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 2898540 00:15:37.725 10:23:14 nvmf_rdma.nvmf_zcopy -- 
nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:37.725 10:23:14 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 2898540 ']' 00:15:37.725 10:23:14 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:37.725 10:23:14 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:37.725 10:23:14 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:37.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:37.725 10:23:14 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:37.725 10:23:14 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:37.725 [2024-07-15 10:23:14.689057] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:15:37.725 [2024-07-15 10:23:14.689125] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:37.725 EAL: No free 2048 kB hugepages reported on node 1 00:15:37.725 [2024-07-15 10:23:14.779073] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:37.725 [2024-07-15 10:23:14.871957] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:37.725 [2024-07-15 10:23:14.872020] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:37.725 [2024-07-15 10:23:14.872029] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:37.725 [2024-07-15 10:23:14.872036] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:37.725 [2024-07-15 10:23:14.872043] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
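The nvmf_tgt launch and the waitforlisten 2898540 call above reduce to a start-then-poll pattern: fork the target with the flags shown in the trace, then wait until it answers on /var/tmp/spdk.sock before any rpc_cmd is issued. A condensed sketch of that pattern (the binary path, flags, socket path and the max_retries=100 bound are copied from the trace; the polling body itself is assumed, since waitforlisten is not expanded here):

    # launch as traced at nvmf/common.sh@480-482
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!

    # waitforlisten sketch: poll the UNIX RPC socket (details assumed)
    rpc_sock=/var/tmp/spdk.sock
    for ((i = 0; i < 100; i++)); do
        kill -0 "$nvmfpid" 2>/dev/null || exit 1   # target died before it started listening
        [[ -S "$rpc_sock" ]] && break              # socket appears once the app is listening
        sleep 0.1
    done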
00:15:37.725 [2024-07-15 10:23:14.872073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:38.298 10:23:15 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:38.298 10:23:15 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:15:38.298 10:23:15 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:38.298 10:23:15 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:38.298 10:23:15 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:38.559 10:23:15 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:38.559 10:23:15 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' rdma '!=' tcp ']' 00:15:38.559 10:23:15 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@16 -- # echo 'Unsupported transport: rdma' 00:15:38.559 Unsupported transport: rdma 00:15:38.559 10:23:15 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@17 -- # exit 0 00:15:38.559 10:23:15 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@1 -- # process_shm --id 0 00:15:38.559 10:23:15 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@806 -- # type=--id 00:15:38.559 10:23:15 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@807 -- # id=0 00:15:38.559 10:23:15 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:15:38.559 10:23:15 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:38.559 10:23:15 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:15:38.559 10:23:15 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:15:38.559 10:23:15 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@818 -- # for n in $shm_files 00:15:38.559 10:23:15 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:38.559 nvmf_trace.0 00:15:38.559 10:23:15 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@821 -- # return 0 00:15:38.559 10:23:15 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@1 -- # nvmftestfini 00:15:38.559 10:23:15 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:38.559 10:23:15 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:15:38.559 10:23:15 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:15:38.559 10:23:15 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:15:38.559 10:23:15 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:15:38.559 10:23:15 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:38.559 10:23:15 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:15:38.559 rmmod nvme_rdma 00:15:38.559 rmmod nvme_fabrics 00:15:38.559 10:23:15 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:38.559 10:23:15 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:15:38.559 10:23:15 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:15:38.559 10:23:15 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 2898540 ']' 00:15:38.559 10:23:15 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 2898540 00:15:38.559 10:23:15 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 2898540 ']' 00:15:38.559 10:23:15 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 2898540 00:15:38.559 10:23:15 
nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:15:38.559 10:23:15 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:38.559 10:23:15 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2898540 00:15:38.559 10:23:15 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:38.559 10:23:15 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:38.559 10:23:15 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2898540' 00:15:38.559 killing process with pid 2898540 00:15:38.559 10:23:15 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 2898540 00:15:38.559 10:23:15 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 2898540 00:15:38.820 10:23:15 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:38.820 10:23:15 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:15:38.820 00:15:38.820 real 0m9.418s 00:15:38.820 user 0m3.758s 00:15:38.820 sys 0m6.308s 00:15:38.820 10:23:15 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:38.820 10:23:15 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:38.820 ************************************ 00:15:38.820 END TEST nvmf_zcopy 00:15:38.820 ************************************ 00:15:38.820 10:23:15 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:15:38.820 10:23:15 nvmf_rdma -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:15:38.820 10:23:15 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:38.820 10:23:15 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:38.820 10:23:15 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:15:38.820 ************************************ 00:15:38.820 START TEST nvmf_nmic 00:15:38.820 ************************************ 00:15:38.820 10:23:15 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:15:39.082 * Looking for test storage... 
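The zcopy run that just ended above exercised nothing beyond target start-up: target/zcopy.sh guards on the transport immediately and exits 0 on rdma, which run_test still records as a passing END TEST nvmf_zcopy. Reconstructed from the target/zcopy.sh@15-17 trace lines (a sketch; the $TEST_TRANSPORT name is an assumption, the trace only shows its expanded value rdma):

    # target/zcopy.sh guard as traced above
    if [ "$TEST_TRANSPORT" != tcp ]; then
        echo "Unsupported transport: $TEST_TRANSPORT"   # printed as 'Unsupported transport: rdma' here
        exit 0                                          # clean exit keeps the overall suite green
    fi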
00:15:39.082 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:39.082 10:23:16 nvmf_rdma.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:39.082 10:23:16 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:15:39.082 10:23:16 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:39.082 10:23:16 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:39.082 10:23:16 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:39.082 10:23:16 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:39.082 10:23:16 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:39.082 10:23:16 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:39.082 10:23:16 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:39.082 10:23:16 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:39.082 10:23:16 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:39.082 10:23:16 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:39.082 10:23:16 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:39.082 10:23:16 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:39.082 10:23:16 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:39.082 10:23:16 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:39.082 10:23:16 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:39.082 10:23:16 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:39.082 10:23:16 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:39.082 10:23:16 nvmf_rdma.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:39.082 10:23:16 nvmf_rdma.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:39.082 10:23:16 nvmf_rdma.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:39.082 10:23:16 nvmf_rdma.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.082 10:23:16 nvmf_rdma.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.082 
10:23:16 nvmf_rdma.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.082 10:23:16 nvmf_rdma.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:15:39.082 10:23:16 nvmf_rdma.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.082 10:23:16 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:15:39.082 10:23:16 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:39.082 10:23:16 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:39.082 10:23:16 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:39.082 10:23:16 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:39.082 10:23:16 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:39.082 10:23:16 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:39.082 10:23:16 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:39.082 10:23:16 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:39.082 10:23:16 nvmf_rdma.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:39.082 10:23:16 nvmf_rdma.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:39.082 10:23:16 nvmf_rdma.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:15:39.082 10:23:16 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:15:39.082 10:23:16 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:39.082 10:23:16 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:39.082 10:23:16 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:39.082 10:23:16 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:39.082 10:23:16 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:39.082 10:23:16 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:39.083 10:23:16 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:39.083 10:23:16 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:39.083 10:23:16 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:39.083 10:23:16 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:15:39.083 10:23:16 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:47.224 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:15:47.224 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:15:47.224 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:47.224 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:47.224 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:47.224 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:47.224 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:47.224 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:15:47.224 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:47.224 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:15:47.224 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:15:47.224 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:15:47.224 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:15:47.224 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:15:47.224 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:15:47.224 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:47.224 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:47.224 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:47.224 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:47.224 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:47.224 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:47.224 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:47.224 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:47.224 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:47.224 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:15:47.225 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:47.225 10:23:23 
nvmf_rdma.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:15:47.225 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:15:47.225 Found net devices under 0000:98:00.0: mlx_0_0 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:15:47.225 Found net devices under 0000:98:00.1: mlx_0_1 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@420 -- # rdma_device_init 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:15:47.225 10:23:23 
nvmf_rdma.nvmf_nmic -- nvmf/common.sh@58 -- # uname 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@62 -- # modprobe ib_cm 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@63 -- # modprobe ib_core 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@64 -- # modprobe ib_umad 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@66 -- # modprobe iw_cm 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@502 -- # allocate_nic_ips 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@73 -- # get_rdma_if_list 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:15:47.225 10:23:23 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:15:47.225 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group 
default qlen 1000 00:15:47.225 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:15:47.225 altname enp152s0f0np0 00:15:47.225 altname ens817f0np0 00:15:47.225 inet 192.168.100.8/24 scope global mlx_0_0 00:15:47.225 valid_lft forever preferred_lft forever 00:15:47.225 10:23:24 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:47.225 10:23:24 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:15:47.225 10:23:24 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:47.225 10:23:24 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:47.225 10:23:24 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:47.225 10:23:24 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:47.225 10:23:24 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:15:47.225 10:23:24 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:15:47.225 10:23:24 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:15:47.225 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:47.225 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:15:47.225 altname enp152s0f1np1 00:15:47.225 altname ens817f1np1 00:15:47.225 inet 192.168.100.9/24 scope global mlx_0_1 00:15:47.225 valid_lft forever preferred_lft forever 00:15:47.225 10:23:24 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:15:47.225 10:23:24 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:47.225 10:23:24 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:47.225 10:23:24 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:15:47.225 10:23:24 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:15:47.226 10:23:24 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@86 -- # get_rdma_if_list 00:15:47.226 10:23:24 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:47.226 10:23:24 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:47.226 10:23:24 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:47.226 10:23:24 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:47.226 10:23:24 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:47.226 10:23:24 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:47.226 10:23:24 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:47.226 10:23:24 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:47.226 10:23:24 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_0 00:15:47.226 10:23:24 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:15:47.226 10:23:24 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:47.226 10:23:24 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:47.226 10:23:24 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:47.226 10:23:24 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:47.226 10:23:24 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:47.226 10:23:24 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_1 00:15:47.226 10:23:24 nvmf_rdma.nvmf_nmic -- 
nvmf/common.sh@105 -- # continue 2 00:15:47.226 10:23:24 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:47.226 10:23:24 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:15:47.226 10:23:24 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:15:47.226 10:23:24 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:15:47.226 10:23:24 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:47.226 10:23:24 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:47.226 10:23:24 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:47.226 10:23:24 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:15:47.226 10:23:24 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:15:47.226 10:23:24 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:15:47.226 10:23:24 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:47.226 10:23:24 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:47.226 10:23:24 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:15:47.226 192.168.100.9' 00:15:47.226 10:23:24 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:15:47.226 192.168.100.9' 00:15:47.226 10:23:24 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@457 -- # head -n 1 00:15:47.226 10:23:24 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:47.226 10:23:24 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:15:47.226 192.168.100.9' 00:15:47.226 10:23:24 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@458 -- # tail -n +2 00:15:47.226 10:23:24 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@458 -- # head -n 1 00:15:47.226 10:23:24 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:47.226 10:23:24 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:15:47.226 10:23:24 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:47.226 10:23:24 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:15:47.226 10:23:24 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:15:47.226 10:23:24 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:15:47.226 10:23:24 nvmf_rdma.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:15:47.226 10:23:24 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:47.226 10:23:24 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:47.226 10:23:24 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:47.226 10:23:24 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=2903045 00:15:47.226 10:23:24 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 2903045 00:15:47.226 10:23:24 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:47.226 10:23:24 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 2903045 ']' 00:15:47.226 10:23:24 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:47.226 10:23:24 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:47.226 10:23:24 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:47.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:47.226 10:23:24 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:47.226 10:23:24 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:47.226 [2024-07-15 10:23:24.183512] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:15:47.226 [2024-07-15 10:23:24.183581] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:47.226 EAL: No free 2048 kB hugepages reported on node 1 00:15:47.226 [2024-07-15 10:23:24.254893] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:47.226 [2024-07-15 10:23:24.331821] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:47.226 [2024-07-15 10:23:24.331861] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:47.226 [2024-07-15 10:23:24.331868] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:47.226 [2024-07-15 10:23:24.331875] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:47.226 [2024-07-15 10:23:24.331880] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:47.226 [2024-07-15 10:23:24.332027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:47.226 [2024-07-15 10:23:24.332149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:47.226 [2024-07-15 10:23:24.332306] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:47.226 [2024-07-15 10:23:24.332307] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:47.796 10:23:24 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:47.796 10:23:24 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:15:47.796 10:23:24 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:47.796 10:23:24 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:47.797 10:23:24 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:48.057 10:23:25 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:48.057 10:23:25 nvmf_rdma.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:15:48.057 10:23:25 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.057 10:23:25 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:48.057 [2024-07-15 10:23:25.047766] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x141e200/0x14226f0) succeed. 00:15:48.057 [2024-07-15 10:23:25.061789] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x141f840/0x1463d80) succeed. 
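With the rdma transport created and both IB devices registered (the two create_ib_device notices above), nmic.sh builds its target configuration over JSON-RPC; the trace below shows the calls going through the rpc_cmd wrapper. The same sequence expressed with the stand-alone scripts/rpc.py client would look roughly like this (a sketch for orientation only — the test itself talks to /var/tmp/spdk.sock via rpc_cmd, and every argument below is copied from the trace):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192    # already issued above
    $rpc bdev_malloc_create 64 512 -b Malloc0                               # 64 MiB malloc bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

The negative check that follows (adding the same Malloc0 to nqn.2016-06.io.spdk:cnode2) is meant to fail, which is why the JSON-RPC error further down is reported as the expected result.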
00:15:48.057 10:23:25 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.057 10:23:25 nvmf_rdma.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:48.057 10:23:25 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.057 10:23:25 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:48.057 Malloc0 00:15:48.057 10:23:25 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.057 10:23:25 nvmf_rdma.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:48.057 10:23:25 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.057 10:23:25 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:48.057 10:23:25 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.057 10:23:25 nvmf_rdma.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:48.057 10:23:25 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.057 10:23:25 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:48.057 10:23:25 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.057 10:23:25 nvmf_rdma.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:48.057 10:23:25 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.057 10:23:25 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:48.057 [2024-07-15 10:23:25.237321] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:48.057 10:23:25 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.057 10:23:25 nvmf_rdma.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:15:48.057 test case1: single bdev can't be used in multiple subsystems 00:15:48.057 10:23:25 nvmf_rdma.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:15:48.057 10:23:25 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.057 10:23:25 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:48.318 10:23:25 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.318 10:23:25 nvmf_rdma.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:15:48.318 10:23:25 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.318 10:23:25 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:48.318 10:23:25 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.318 10:23:25 nvmf_rdma.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:15:48.318 10:23:25 nvmf_rdma.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:15:48.318 10:23:25 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.318 10:23:25 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:48.318 [2024-07-15 10:23:25.273171] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:15:48.318 [2024-07-15 
10:23:25.273189] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:15:48.318 [2024-07-15 10:23:25.273196] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.318 request: 00:15:48.318 { 00:15:48.318 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:15:48.318 "namespace": { 00:15:48.318 "bdev_name": "Malloc0", 00:15:48.318 "no_auto_visible": false 00:15:48.318 }, 00:15:48.318 "method": "nvmf_subsystem_add_ns", 00:15:48.318 "req_id": 1 00:15:48.318 } 00:15:48.318 Got JSON-RPC error response 00:15:48.318 response: 00:15:48.318 { 00:15:48.318 "code": -32602, 00:15:48.318 "message": "Invalid parameters" 00:15:48.318 } 00:15:48.318 10:23:25 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:15:48.318 10:23:25 nvmf_rdma.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:15:48.318 10:23:25 nvmf_rdma.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:15:48.318 10:23:25 nvmf_rdma.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:15:48.318 Adding namespace failed - expected result. 00:15:48.318 10:23:25 nvmf_rdma.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:15:48.318 test case2: host connect to nvmf target in multiple paths 00:15:48.318 10:23:25 nvmf_rdma.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:15:48.318 10:23:25 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.318 10:23:25 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:48.318 [2024-07-15 10:23:25.285240] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:15:48.318 10:23:25 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.318 10:23:25 nvmf_rdma.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:49.698 10:23:26 nvmf_rdma.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421 00:15:51.081 10:23:28 nvmf_rdma.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:15:51.081 10:23:28 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:15:51.081 10:23:28 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:51.081 10:23:28 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:51.081 10:23:28 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:15:53.047 10:23:30 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:53.047 10:23:30 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:53.047 10:23:30 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:53.047 10:23:30 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:53.047 10:23:30 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:53.047 10:23:30 
nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:15:53.047 10:23:30 nvmf_rdma.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:53.047 [global] 00:15:53.047 thread=1 00:15:53.047 invalidate=1 00:15:53.047 rw=write 00:15:53.047 time_based=1 00:15:53.047 runtime=1 00:15:53.047 ioengine=libaio 00:15:53.047 direct=1 00:15:53.047 bs=4096 00:15:53.048 iodepth=1 00:15:53.048 norandommap=0 00:15:53.048 numjobs=1 00:15:53.048 00:15:53.048 verify_dump=1 00:15:53.048 verify_backlog=512 00:15:53.048 verify_state_save=0 00:15:53.048 do_verify=1 00:15:53.048 verify=crc32c-intel 00:15:53.048 [job0] 00:15:53.048 filename=/dev/nvme0n1 00:15:53.048 Could not set queue depth (nvme0n1) 00:15:53.306 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:53.306 fio-3.35 00:15:53.306 Starting 1 thread 00:15:54.690 00:15:54.690 job0: (groupid=0, jobs=1): err= 0: pid=2904493: Mon Jul 15 10:23:31 2024 00:15:54.690 read: IOPS=7994, BW=31.2MiB/s (32.7MB/s)(31.3MiB/1001msec) 00:15:54.690 slat (nsec): min=5648, max=28052, avg=6013.58, stdev=680.22 00:15:54.690 clat (usec): min=33, max=129, avg=53.25, stdev= 3.57 00:15:54.690 lat (usec): min=51, max=135, avg=59.27, stdev= 3.59 00:15:54.690 clat percentiles (usec): 00:15:54.690 | 1.00th=[ 48], 5.00th=[ 49], 10.00th=[ 49], 20.00th=[ 51], 00:15:54.690 | 30.00th=[ 51], 40.00th=[ 52], 50.00th=[ 53], 60.00th=[ 55], 00:15:54.690 | 70.00th=[ 56], 80.00th=[ 57], 90.00th=[ 59], 95.00th=[ 60], 00:15:54.690 | 99.00th=[ 63], 99.50th=[ 64], 99.90th=[ 67], 99.95th=[ 69], 00:15:54.690 | 99.99th=[ 130] 00:15:54.690 write: IOPS=8183, BW=32.0MiB/s (33.5MB/s)(32.0MiB/1001msec); 0 zone resets 00:15:54.690 slat (nsec): min=7646, max=46262, avg=8383.30, stdev=945.93 00:15:54.690 clat (usec): min=33, max=149, avg=51.64, stdev= 3.68 00:15:54.690 lat (usec): min=51, max=196, avg=60.02, stdev= 3.87 00:15:54.690 clat percentiles (usec): 00:15:54.690 | 1.00th=[ 46], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 49], 00:15:54.690 | 30.00th=[ 50], 40.00th=[ 51], 50.00th=[ 52], 60.00th=[ 53], 00:15:54.690 | 70.00th=[ 54], 80.00th=[ 55], 90.00th=[ 57], 95.00th=[ 58], 00:15:54.690 | 99.00th=[ 61], 99.50th=[ 62], 99.90th=[ 67], 99.95th=[ 69], 00:15:54.690 | 99.99th=[ 151] 00:15:54.690 bw ( KiB/s): min=32768, max=32768, per=100.00%, avg=32768.00, stdev= 0.00, samples=1 00:15:54.690 iops : min= 8192, max= 8192, avg=8192.00, stdev= 0.00, samples=1 00:15:54.690 lat (usec) : 50=26.82%, 100=73.17%, 250=0.01% 00:15:54.690 cpu : usr=10.60%, sys=15.00%, ctx=16194, majf=0, minf=1 00:15:54.690 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:54.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:54.690 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:54.690 issued rwts: total=8002,8192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:54.690 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:54.690 00:15:54.690 Run status group 0 (all jobs): 00:15:54.690 READ: bw=31.2MiB/s (32.7MB/s), 31.2MiB/s-31.2MiB/s (32.7MB/s-32.7MB/s), io=31.3MiB (32.8MB), run=1001-1001msec 00:15:54.690 WRITE: bw=32.0MiB/s (33.5MB/s), 32.0MiB/s-32.0MiB/s (33.5MB/s-33.5MB/s), io=32.0MiB (33.6MB), run=1001-1001msec 00:15:54.690 00:15:54.690 Disk stats (read/write): 00:15:54.690 nvme0n1: ios=7218/7385, merge=0/0, ticks=344/315, in_queue=659, util=90.88% 00:15:54.690 10:23:31 
nvmf_rdma.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:57.236 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:15:57.236 10:23:34 nvmf_rdma.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:57.236 10:23:34 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:15:57.236 10:23:34 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:57.236 10:23:34 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:57.236 10:23:34 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:57.236 10:23:34 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:57.236 10:23:34 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:15:57.236 10:23:34 nvmf_rdma.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:15:57.236 10:23:34 nvmf_rdma.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:15:57.236 10:23:34 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:57.236 10:23:34 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:15:57.236 10:23:34 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:15:57.236 10:23:34 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:15:57.236 10:23:34 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:15:57.236 10:23:34 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:57.236 10:23:34 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:15:57.236 rmmod nvme_rdma 00:15:57.236 rmmod nvme_fabrics 00:15:57.236 10:23:34 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:57.236 10:23:34 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:15:57.236 10:23:34 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:15:57.236 10:23:34 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 2903045 ']' 00:15:57.236 10:23:34 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 2903045 00:15:57.236 10:23:34 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 2903045 ']' 00:15:57.236 10:23:34 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 2903045 00:15:57.236 10:23:34 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:15:57.236 10:23:34 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:57.236 10:23:34 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2903045 00:15:57.236 10:23:34 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:57.236 10:23:34 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:57.236 10:23:34 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2903045' 00:15:57.236 killing process with pid 2903045 00:15:57.236 10:23:34 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 2903045 00:15:57.236 10:23:34 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 2903045 00:15:57.497 10:23:34 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:57.497 10:23:34 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:15:57.497 00:15:57.497 real 0m18.682s 00:15:57.497 user 0m58.162s 00:15:57.497 sys 0m6.764s 00:15:57.497 10:23:34 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1124 -- 
# xtrace_disable 00:15:57.497 10:23:34 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:57.497 ************************************ 00:15:57.497 END TEST nvmf_nmic 00:15:57.497 ************************************ 00:15:57.497 10:23:34 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:15:57.497 10:23:34 nvmf_rdma -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:15:57.497 10:23:34 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:57.497 10:23:34 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:57.497 10:23:34 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:15:57.497 ************************************ 00:15:57.497 START TEST nvmf_fio_target 00:15:57.497 ************************************ 00:15:57.497 10:23:34 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:15:57.757 * Looking for test storage... 00:15:57.758 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:57.758 10:23:34 nvmf_rdma.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:57.758 10:23:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:15:57.758 10:23:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:57.758 10:23:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:57.758 10:23:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:57.758 10:23:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:57.758 10:23:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:57.758 10:23:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:57.758 10:23:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:57.758 10:23:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:57.758 10:23:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:57.758 10:23:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:57.758 10:23:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:57.758 10:23:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:57.758 10:23:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:57.758 10:23:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:57.758 10:23:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:57.758 10:23:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:57.758 10:23:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:57.758 10:23:34 nvmf_rdma.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:57.758 10:23:34 nvmf_rdma.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:57.758 10:23:34 nvmf_rdma.nvmf_fio_target -- scripts/common.sh@517 -- 
# source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:57.758 10:23:34 nvmf_rdma.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.758 10:23:34 nvmf_rdma.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.758 10:23:34 nvmf_rdma.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.758 10:23:34 nvmf_rdma.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:15:57.758 10:23:34 nvmf_rdma.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.758 10:23:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:15:57.758 10:23:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:57.758 10:23:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:57.758 10:23:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:57.758 10:23:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:57.758 10:23:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:57.758 10:23:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:57.758 10:23:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:57.758 10:23:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:57.758 10:23:34 nvmf_rdma.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:57.758 10:23:34 nvmf_rdma.nvmf_fio_target -- target/fio.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:15:57.758 10:23:34 nvmf_rdma.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:15:57.758 10:23:34 nvmf_rdma.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:15:57.758 10:23:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:15:57.758 10:23:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:57.758 10:23:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:57.758 10:23:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:57.758 10:23:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:57.758 10:23:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:57.758 10:23:34 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:57.758 10:23:34 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:57.758 10:23:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:57.758 10:23:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:57.758 10:23:34 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:15:57.758 10:23:34 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:05.903 10:23:42 
nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:16:05.903 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:16:05.903 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:05.903 10:23:42 
nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:16:05.903 Found net devices under 0000:98:00.0: mlx_0_0 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:16:05.903 Found net devices under 0000:98:00.1: mlx_0_1 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@420 -- # rdma_device_init 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@58 -- # uname 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@62 -- # modprobe ib_cm 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@63 -- # modprobe ib_core 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@64 -- # modprobe ib_umad 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@66 -- # modprobe iw_cm 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@502 -- # allocate_nic_ips 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@73 -- # get_rdma_if_list 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- 
nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:05.903 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:16:05.904 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:05.904 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:16:05.904 altname enp152s0f0np0 00:16:05.904 altname ens817f0np0 00:16:05.904 inet 192.168.100.8/24 scope global mlx_0_0 00:16:05.904 valid_lft forever preferred_lft forever 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:16:05.904 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:05.904 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:16:05.904 altname enp152s0f1np1 00:16:05.904 altname ens817f1np1 00:16:05.904 inet 
192.168.100.9/24 scope global mlx_0_1 00:16:05.904 valid_lft forever preferred_lft forever 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@86 -- # get_rdma_if_list 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- 
nvmf/common.sh@113 -- # cut -d/ -f1 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:16:05.904 192.168.100.9' 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:16:05.904 192.168.100.9' 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@457 -- # head -n 1 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:16:05.904 192.168.100.9' 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@458 -- # tail -n +2 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@458 -- # head -n 1 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=2909432 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 2909432 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 2909432 ']' 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:05.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:05.904 10:23:42 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.904 [2024-07-15 10:23:42.571120] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:16:05.904 [2024-07-15 10:23:42.571191] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:05.904 EAL: No free 2048 kB hugepages reported on node 1 00:16:05.904 [2024-07-15 10:23:42.643549] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:05.904 [2024-07-15 10:23:42.710652] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:05.904 [2024-07-15 10:23:42.710692] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:05.904 [2024-07-15 10:23:42.710699] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:05.904 [2024-07-15 10:23:42.710705] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:05.904 [2024-07-15 10:23:42.710711] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:05.904 [2024-07-15 10:23:42.710852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:05.904 [2024-07-15 10:23:42.710970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:05.904 [2024-07-15 10:23:42.711127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:05.904 [2024-07-15 10:23:42.711128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:06.174 10:23:43 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:06.174 10:23:43 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:16:06.174 10:23:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:06.174 10:23:43 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:06.174 10:23:43 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.441 10:23:43 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:06.441 10:23:43 nvmf_rdma.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:06.441 [2024-07-15 10:23:43.567868] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x16ca200/0x16ce6f0) succeed. 00:16:06.441 [2024-07-15 10:23:43.582530] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x16cb840/0x170fd80) succeed. 
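The interface and address discovery that nvmf/common.sh just repeated for the fio_target run (get_rdma_if_list, get_ip_address, and the RDMA_IP_LIST split) reduces to roughly the shell below. This is a sketch reconstructed from the trace, not the verbatim common.sh source, and it assumes mlx_0_0 and mlx_0_1 are the netdevs backing the two mlx5 ports.

  get_ip_address() {
      local interface=$1
      # "ip -o -4" prints one line per IPv4 address; field 4 is the CIDR, e.g. 192.168.100.8/24
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  RDMA_IP_LIST=$(printf '%s\n%s' "$(get_ip_address mlx_0_0)" "$(get_ip_address mlx_0_1)")
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9

With the addresses in hand, fio.sh builds its bdev layout next: two standalone malloc bdevs, a raid0 over two more, and a concat raid over three more, all attached as namespaces of nqn.2016-06.io.spdk:cnode1 before the host connects and runs the four-job fio workloads.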
00:16:06.701 10:23:43 nvmf_rdma.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:06.961 10:23:43 nvmf_rdma.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:16:06.961 10:23:43 nvmf_rdma.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:06.961 10:23:44 nvmf_rdma.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:16:06.961 10:23:44 nvmf_rdma.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:07.220 10:23:44 nvmf_rdma.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:16:07.220 10:23:44 nvmf_rdma.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:07.479 10:23:44 nvmf_rdma.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:16:07.479 10:23:44 nvmf_rdma.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:16:07.479 10:23:44 nvmf_rdma.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:07.738 10:23:44 nvmf_rdma.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:16:07.738 10:23:44 nvmf_rdma.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:07.998 10:23:44 nvmf_rdma.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:16:07.998 10:23:44 nvmf_rdma.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:07.998 10:23:45 nvmf_rdma.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:16:07.998 10:23:45 nvmf_rdma.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:16:08.259 10:23:45 nvmf_rdma.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:08.521 10:23:45 nvmf_rdma.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:08.521 10:23:45 nvmf_rdma.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:08.521 10:23:45 nvmf_rdma.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:08.521 10:23:45 nvmf_rdma.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:08.782 10:23:45 nvmf_rdma.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:08.782 [2024-07-15 10:23:45.969173] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:09.043 10:23:45 nvmf_rdma.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 raid0 00:16:09.043 10:23:46 nvmf_rdma.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:16:09.303 10:23:46 nvmf_rdma.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:16:10.686 10:23:47 nvmf_rdma.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:16:10.686 10:23:47 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:16:10.686 10:23:47 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:10.686 10:23:47 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:16:10.686 10:23:47 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:16:10.686 10:23:47 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:16:12.594 10:23:49 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:12.594 10:23:49 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:12.594 10:23:49 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:12.854 10:23:49 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:16:12.854 10:23:49 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:12.854 10:23:49 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:16:12.854 10:23:49 nvmf_rdma.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:12.854 [global] 00:16:12.854 thread=1 00:16:12.854 invalidate=1 00:16:12.854 rw=write 00:16:12.854 time_based=1 00:16:12.854 runtime=1 00:16:12.854 ioengine=libaio 00:16:12.854 direct=1 00:16:12.854 bs=4096 00:16:12.854 iodepth=1 00:16:12.854 norandommap=0 00:16:12.854 numjobs=1 00:16:12.854 00:16:12.854 verify_dump=1 00:16:12.854 verify_backlog=512 00:16:12.854 verify_state_save=0 00:16:12.854 do_verify=1 00:16:12.854 verify=crc32c-intel 00:16:12.854 [job0] 00:16:12.854 filename=/dev/nvme0n1 00:16:12.854 [job1] 00:16:12.854 filename=/dev/nvme0n2 00:16:12.854 [job2] 00:16:12.854 filename=/dev/nvme0n3 00:16:12.854 [job3] 00:16:12.854 filename=/dev/nvme0n4 00:16:12.854 Could not set queue depth (nvme0n1) 00:16:12.854 Could not set queue depth (nvme0n2) 00:16:12.854 Could not set queue depth (nvme0n3) 00:16:12.854 Could not set queue depth (nvme0n4) 00:16:13.170 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:13.170 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:13.170 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:13.170 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:13.170 fio-3.35 00:16:13.170 Starting 4 threads 00:16:14.572 00:16:14.573 job0: (groupid=0, jobs=1): err= 0: pid=2911198: Mon Jul 15 10:23:51 2024 00:16:14.573 read: IOPS=6568, BW=25.7MiB/s (26.9MB/s)(25.7MiB/1001msec) 00:16:14.573 slat 
(nsec): min=5694, max=99066, avg=7011.95, stdev=4473.07 00:16:14.573 clat (usec): min=2, max=409, avg=70.00, stdev=41.50 00:16:14.573 lat (usec): min=52, max=442, avg=77.02, stdev=44.87 00:16:14.573 clat percentiles (usec): 00:16:14.573 | 1.00th=[ 49], 5.00th=[ 50], 10.00th=[ 52], 20.00th=[ 54], 00:16:14.573 | 30.00th=[ 56], 40.00th=[ 58], 50.00th=[ 60], 60.00th=[ 64], 00:16:14.573 | 70.00th=[ 69], 80.00th=[ 73], 90.00th=[ 78], 95.00th=[ 99], 00:16:14.573 | 99.00th=[ 285], 99.50th=[ 318], 99.90th=[ 371], 99.95th=[ 396], 00:16:14.573 | 99.99th=[ 408] 00:16:14.573 write: IOPS=6649, BW=26.0MiB/s (27.2MB/s)(26.0MiB/1001msec); 0 zone resets 00:16:14.573 slat (nsec): min=7837, max=51962, avg=8700.79, stdev=1793.29 00:16:14.573 clat (usec): min=43, max=309, avg=60.98, stdev=17.14 00:16:14.573 lat (usec): min=52, max=341, avg=69.69, stdev=18.11 00:16:14.573 clat percentiles (usec): 00:16:14.573 | 1.00th=[ 47], 5.00th=[ 49], 10.00th=[ 50], 20.00th=[ 52], 00:16:14.573 | 30.00th=[ 53], 40.00th=[ 56], 50.00th=[ 58], 60.00th=[ 62], 00:16:14.573 | 70.00th=[ 67], 80.00th=[ 70], 90.00th=[ 74], 95.00th=[ 77], 00:16:14.573 | 99.00th=[ 87], 99.50th=[ 210], 99.90th=[ 281], 99.95th=[ 285], 00:16:14.573 | 99.99th=[ 310] 00:16:14.573 bw ( KiB/s): min=32702, max=32702, per=55.09%, avg=32702.00, stdev= 0.00, samples=1 00:16:14.573 iops : min= 8175, max= 8175, avg=8175.00, stdev= 0.00, samples=1 00:16:14.573 lat (usec) : 4=0.01%, 50=8.22%, 100=89.02%, 250=1.49%, 500=1.27% 00:16:14.573 cpu : usr=7.70%, sys=14.90%, ctx=13231, majf=0, minf=1 00:16:14.573 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:14.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:14.573 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:14.573 issued rwts: total=6575,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:14.573 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:14.573 job1: (groupid=0, jobs=1): err= 0: pid=2911199: Mon Jul 15 10:23:51 2024 00:16:14.573 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:16:14.573 slat (nsec): min=5313, max=51841, avg=11368.19, stdev=9670.05 00:16:14.573 clat (usec): min=40, max=498, avg=127.35, stdev=95.94 00:16:14.573 lat (usec): min=52, max=516, avg=138.72, stdev=103.32 00:16:14.573 clat percentiles (usec): 00:16:14.573 | 1.00th=[ 51], 5.00th=[ 63], 10.00th=[ 65], 20.00th=[ 69], 00:16:14.573 | 30.00th=[ 71], 40.00th=[ 73], 50.00th=[ 76], 60.00th=[ 79], 00:16:14.573 | 70.00th=[ 110], 80.00th=[ 227], 90.00th=[ 277], 95.00th=[ 351], 00:16:14.573 | 99.00th=[ 412], 99.50th=[ 429], 99.90th=[ 469], 99.95th=[ 490], 00:16:14.573 | 99.99th=[ 498] 00:16:14.573 write: IOPS=3693, BW=14.4MiB/s (15.1MB/s)(14.4MiB/1001msec); 0 zone resets 00:16:14.573 slat (nsec): min=7773, max=52643, avg=13704.25, stdev=9514.78 00:16:14.573 clat (usec): min=44, max=482, avg=115.10, stdev=85.09 00:16:14.573 lat (usec): min=52, max=506, avg=128.81, stdev=91.80 00:16:14.573 clat percentiles (usec): 00:16:14.573 | 1.00th=[ 51], 5.00th=[ 60], 10.00th=[ 62], 20.00th=[ 65], 00:16:14.573 | 30.00th=[ 68], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 76], 00:16:14.573 | 70.00th=[ 87], 80.00th=[ 204], 90.00th=[ 260], 95.00th=[ 293], 00:16:14.573 | 99.00th=[ 383], 99.50th=[ 400], 99.90th=[ 424], 99.95th=[ 449], 00:16:14.573 | 99.99th=[ 482] 00:16:14.573 bw ( KiB/s): min= 8175, max= 8175, per=13.77%, avg=8175.00, stdev= 0.00, samples=1 00:16:14.573 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:16:14.573 lat (usec) : 
50=0.77%, 100=69.92%, 250=15.85%, 500=13.46% 00:16:14.573 cpu : usr=6.50%, sys=11.70%, ctx=7281, majf=0, minf=1 00:16:14.573 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:14.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:14.573 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:14.573 issued rwts: total=3584,3697,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:14.573 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:14.573 job2: (groupid=0, jobs=1): err= 0: pid=2911200: Mon Jul 15 10:23:51 2024 00:16:14.573 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:16:14.573 slat (nsec): min=5904, max=48811, avg=19298.59, stdev=11625.36 00:16:14.573 clat (usec): min=49, max=503, avg=215.05, stdev=96.44 00:16:14.573 lat (usec): min=61, max=532, avg=234.35, stdev=102.83 00:16:14.573 clat percentiles (usec): 00:16:14.573 | 1.00th=[ 74], 5.00th=[ 85], 10.00th=[ 95], 20.00th=[ 117], 00:16:14.573 | 30.00th=[ 135], 40.00th=[ 192], 50.00th=[ 223], 60.00th=[ 239], 00:16:14.573 | 70.00th=[ 260], 80.00th=[ 289], 90.00th=[ 363], 95.00th=[ 388], 00:16:14.573 | 99.00th=[ 449], 99.50th=[ 461], 99.90th=[ 490], 99.95th=[ 490], 00:16:14.573 | 99.99th=[ 502] 00:16:14.573 write: IOPS=2242, BW=8971KiB/s (9186kB/s)(8980KiB/1001msec); 0 zone resets 00:16:14.573 slat (nsec): min=8216, max=53497, avg=21328.29, stdev=12479.25 00:16:14.573 clat (usec): min=50, max=509, avg=199.86, stdev=89.62 00:16:14.573 lat (usec): min=66, max=518, avg=221.19, stdev=96.36 00:16:14.573 clat percentiles (usec): 00:16:14.573 | 1.00th=[ 70], 5.00th=[ 81], 10.00th=[ 88], 20.00th=[ 106], 00:16:14.573 | 30.00th=[ 123], 40.00th=[ 153], 50.00th=[ 212], 60.00th=[ 237], 00:16:14.573 | 70.00th=[ 251], 80.00th=[ 273], 90.00th=[ 314], 95.00th=[ 359], 00:16:14.573 | 99.00th=[ 412], 99.50th=[ 420], 99.90th=[ 474], 99.95th=[ 478], 00:16:14.573 | 99.99th=[ 510] 00:16:14.573 bw ( KiB/s): min= 8175, max= 8175, per=13.77%, avg=8175.00, stdev= 0.00, samples=1 00:16:14.573 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:16:14.573 lat (usec) : 50=0.02%, 100=14.65%, 250=52.81%, 500=32.47%, 750=0.05% 00:16:14.573 cpu : usr=6.20%, sys=11.70%, ctx=4293, majf=0, minf=1 00:16:14.573 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:14.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:14.573 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:14.573 issued rwts: total=2048,2245,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:14.573 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:14.573 job3: (groupid=0, jobs=1): err= 0: pid=2911201: Mon Jul 15 10:23:51 2024 00:16:14.573 read: IOPS=2048, BW=8192KiB/s (8389kB/s)(8192KiB/1000msec) 00:16:14.573 slat (nsec): min=5817, max=49391, avg=19289.54, stdev=11648.00 00:16:14.573 clat (usec): min=69, max=502, avg=213.73, stdev=97.28 00:16:14.573 lat (usec): min=75, max=533, avg=233.02, stdev=103.98 00:16:14.573 clat percentiles (usec): 00:16:14.573 | 1.00th=[ 76], 5.00th=[ 85], 10.00th=[ 94], 20.00th=[ 115], 00:16:14.573 | 30.00th=[ 133], 40.00th=[ 188], 50.00th=[ 225], 60.00th=[ 239], 00:16:14.573 | 70.00th=[ 258], 80.00th=[ 285], 90.00th=[ 359], 95.00th=[ 396], 00:16:14.573 | 99.00th=[ 449], 99.50th=[ 461], 99.90th=[ 486], 99.95th=[ 498], 00:16:14.573 | 99.99th=[ 502] 00:16:14.573 write: IOPS=2258, BW=9032KiB/s (9249kB/s)(9032KiB/1000msec); 0 zone resets 00:16:14.573 slat (nsec): min=8164, max=64942, 
avg=21395.03, stdev=12781.73 00:16:14.573 clat (usec): min=67, max=479, avg=199.72, stdev=95.48 00:16:14.573 lat (usec): min=75, max=512, avg=221.12, stdev=102.93 00:16:14.573 clat percentiles (usec): 00:16:14.573 | 1.00th=[ 74], 5.00th=[ 79], 10.00th=[ 84], 20.00th=[ 98], 00:16:14.573 | 30.00th=[ 117], 40.00th=[ 145], 50.00th=[ 212], 60.00th=[ 239], 00:16:14.573 | 70.00th=[ 258], 80.00th=[ 277], 90.00th=[ 330], 95.00th=[ 375], 00:16:14.573 | 99.00th=[ 420], 99.50th=[ 429], 99.90th=[ 449], 99.95th=[ 449], 00:16:14.573 | 99.99th=[ 482] 00:16:14.573 bw ( KiB/s): min= 8175, max= 8175, per=13.77%, avg=8175.00, stdev= 0.00, samples=1 00:16:14.573 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:16:14.573 lat (usec) : 100=16.67%, 250=49.72%, 500=33.58%, 750=0.02% 00:16:14.573 cpu : usr=5.60%, sys=12.40%, ctx=4306, majf=0, minf=1 00:16:14.573 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:14.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:14.573 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:14.573 issued rwts: total=2048,2258,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:14.573 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:14.573 00:16:14.573 Run status group 0 (all jobs): 00:16:14.573 READ: bw=55.6MiB/s (58.3MB/s), 8184KiB/s-25.7MiB/s (8380kB/s-26.9MB/s), io=55.7MiB (58.4MB), run=1000-1001msec 00:16:14.573 WRITE: bw=58.0MiB/s (60.8MB/s), 8971KiB/s-26.0MiB/s (9186kB/s-27.2MB/s), io=58.0MiB (60.8MB), run=1000-1001msec 00:16:14.573 00:16:14.573 Disk stats (read/write): 00:16:14.573 nvme0n1: ios=5682/6108, merge=0/0, ticks=329/305, in_queue=634, util=85.77% 00:16:14.573 nvme0n2: ios=2673/3072, merge=0/0, ticks=263/274, in_queue=537, util=86.15% 00:16:14.573 nvme0n3: ios=1536/1758, merge=0/0, ticks=249/259, in_queue=508, util=88.68% 00:16:14.573 nvme0n4: ios=1536/1691, merge=0/0, ticks=258/261, in_queue=519, util=89.52% 00:16:14.573 10:23:51 nvmf_rdma.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:16:14.573 [global] 00:16:14.573 thread=1 00:16:14.573 invalidate=1 00:16:14.573 rw=randwrite 00:16:14.573 time_based=1 00:16:14.573 runtime=1 00:16:14.573 ioengine=libaio 00:16:14.573 direct=1 00:16:14.573 bs=4096 00:16:14.573 iodepth=1 00:16:14.573 norandommap=0 00:16:14.573 numjobs=1 00:16:14.573 00:16:14.573 verify_dump=1 00:16:14.573 verify_backlog=512 00:16:14.573 verify_state_save=0 00:16:14.573 do_verify=1 00:16:14.573 verify=crc32c-intel 00:16:14.573 [job0] 00:16:14.573 filename=/dev/nvme0n1 00:16:14.573 [job1] 00:16:14.573 filename=/dev/nvme0n2 00:16:14.573 [job2] 00:16:14.573 filename=/dev/nvme0n3 00:16:14.573 [job3] 00:16:14.573 filename=/dev/nvme0n4 00:16:14.573 Could not set queue depth (nvme0n1) 00:16:14.573 Could not set queue depth (nvme0n2) 00:16:14.573 Could not set queue depth (nvme0n3) 00:16:14.573 Could not set queue depth (nvme0n4) 00:16:14.848 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:14.848 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:14.848 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:14.848 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:14.848 fio-3.35 00:16:14.848 Starting 4 
threads 00:16:16.259 00:16:16.259 job0: (groupid=0, jobs=1): err= 0: pid=2911727: Mon Jul 15 10:23:53 2024 00:16:16.259 read: IOPS=2963, BW=11.6MiB/s (12.1MB/s)(11.6MiB/1001msec) 00:16:16.259 slat (nsec): min=5489, max=55239, avg=13481.15, stdev=10702.99 00:16:16.259 clat (usec): min=46, max=448, avg=140.95, stdev=73.59 00:16:16.259 lat (usec): min=52, max=479, avg=154.43, stdev=81.07 00:16:16.259 clat percentiles (usec): 00:16:16.259 | 1.00th=[ 52], 5.00th=[ 66], 10.00th=[ 75], 20.00th=[ 90], 00:16:16.259 | 30.00th=[ 99], 40.00th=[ 109], 50.00th=[ 115], 60.00th=[ 120], 00:16:16.259 | 70.00th=[ 129], 80.00th=[ 219], 90.00th=[ 255], 95.00th=[ 285], 00:16:16.259 | 99.00th=[ 379], 99.50th=[ 396], 99.90th=[ 429], 99.95th=[ 445], 00:16:16.259 | 99.99th=[ 449] 00:16:16.259 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:16:16.259 slat (nsec): min=7807, max=69989, avg=16978.33, stdev=11354.85 00:16:16.259 clat (usec): min=50, max=469, avg=151.12, stdev=77.65 00:16:16.259 lat (usec): min=58, max=488, avg=168.10, stdev=85.02 00:16:16.259 clat percentiles (usec): 00:16:16.259 | 1.00th=[ 61], 5.00th=[ 70], 10.00th=[ 81], 20.00th=[ 93], 00:16:16.259 | 30.00th=[ 102], 40.00th=[ 111], 50.00th=[ 116], 60.00th=[ 122], 00:16:16.259 | 70.00th=[ 196], 80.00th=[ 233], 90.00th=[ 265], 95.00th=[ 297], 00:16:16.259 | 99.00th=[ 379], 99.50th=[ 396], 99.90th=[ 433], 99.95th=[ 453], 00:16:16.259 | 99.99th=[ 469] 00:16:16.259 bw ( KiB/s): min=13400, max=13400, per=25.59%, avg=13400.00, stdev= 0.00, samples=1 00:16:16.259 iops : min= 3350, max= 3350, avg=3350.00, stdev= 0.00, samples=1 00:16:16.259 lat (usec) : 50=0.20%, 100=29.33%, 250=57.78%, 500=12.69% 00:16:16.259 cpu : usr=6.20%, sys=13.30%, ctx=6040, majf=0, minf=1 00:16:16.259 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:16.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:16.259 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:16.259 issued rwts: total=2966,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:16.259 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:16.259 job1: (groupid=0, jobs=1): err= 0: pid=2911728: Mon Jul 15 10:23:53 2024 00:16:16.259 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:16:16.259 slat (nsec): min=5754, max=49139, avg=15276.79, stdev=11441.54 00:16:16.259 clat (usec): min=48, max=476, avg=161.64, stdev=78.43 00:16:16.259 lat (usec): min=54, max=482, avg=176.91, stdev=85.14 00:16:16.259 clat percentiles (usec): 00:16:16.259 | 1.00th=[ 56], 5.00th=[ 71], 10.00th=[ 84], 20.00th=[ 98], 00:16:16.259 | 30.00th=[ 111], 40.00th=[ 117], 50.00th=[ 122], 60.00th=[ 149], 00:16:16.259 | 70.00th=[ 219], 80.00th=[ 237], 90.00th=[ 265], 95.00th=[ 293], 00:16:16.259 | 99.00th=[ 388], 99.50th=[ 400], 99.90th=[ 437], 99.95th=[ 437], 00:16:16.259 | 99.99th=[ 478] 00:16:16.259 write: IOPS=3011, BW=11.8MiB/s (12.3MB/s)(11.8MiB/1001msec); 0 zone resets 00:16:16.259 slat (nsec): min=7783, max=65861, avg=17848.81, stdev=11852.60 00:16:16.259 clat (usec): min=45, max=470, avg=155.28, stdev=77.31 00:16:16.259 lat (usec): min=53, max=502, avg=173.13, stdev=84.86 00:16:16.259 clat percentiles (usec): 00:16:16.259 | 1.00th=[ 51], 5.00th=[ 65], 10.00th=[ 73], 20.00th=[ 91], 00:16:16.259 | 30.00th=[ 102], 40.00th=[ 114], 50.00th=[ 119], 60.00th=[ 137], 00:16:16.259 | 70.00th=[ 215], 80.00th=[ 237], 90.00th=[ 262], 95.00th=[ 285], 00:16:16.259 | 99.00th=[ 355], 99.50th=[ 379], 99.90th=[ 441], 99.95th=[ 449], 
00:16:16.259 | 99.99th=[ 469] 00:16:16.259 bw ( KiB/s): min=12288, max=12288, per=23.46%, avg=12288.00, stdev= 0.00, samples=1 00:16:16.259 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:16:16.259 lat (usec) : 50=0.61%, 100=24.32%, 250=60.36%, 500=14.71% 00:16:16.259 cpu : usr=6.80%, sys=12.50%, ctx=5576, majf=0, minf=1 00:16:16.259 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:16.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:16.259 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:16.259 issued rwts: total=2560,3015,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:16.259 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:16.259 job2: (groupid=0, jobs=1): err= 0: pid=2911729: Mon Jul 15 10:23:53 2024 00:16:16.259 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:16:16.259 slat (nsec): min=5463, max=54990, avg=12317.05, stdev=9827.21 00:16:16.259 clat (usec): min=54, max=506, avg=135.48, stdev=69.10 00:16:16.259 lat (usec): min=60, max=536, avg=147.80, stdev=75.56 00:16:16.259 clat percentiles (usec): 00:16:16.259 | 1.00th=[ 62], 5.00th=[ 73], 10.00th=[ 80], 20.00th=[ 91], 00:16:16.260 | 30.00th=[ 98], 40.00th=[ 106], 50.00th=[ 114], 60.00th=[ 118], 00:16:16.260 | 70.00th=[ 123], 80.00th=[ 190], 90.00th=[ 251], 95.00th=[ 277], 00:16:16.260 | 99.00th=[ 367], 99.50th=[ 396], 99.90th=[ 441], 99.95th=[ 478], 00:16:16.260 | 99.99th=[ 506] 00:16:16.260 write: IOPS=3511, BW=13.7MiB/s (14.4MB/s)(13.7MiB/1001msec); 0 zone resets 00:16:16.260 slat (nsec): min=7756, max=57817, avg=14658.58, stdev=10235.88 00:16:16.260 clat (usec): min=51, max=456, avg=133.66, stdev=78.09 00:16:16.260 lat (usec): min=60, max=480, avg=148.32, stdev=85.12 00:16:16.260 clat percentiles (usec): 00:16:16.260 | 1.00th=[ 55], 5.00th=[ 59], 10.00th=[ 64], 20.00th=[ 80], 00:16:16.260 | 30.00th=[ 91], 40.00th=[ 100], 50.00th=[ 109], 60.00th=[ 115], 00:16:16.260 | 70.00th=[ 121], 80.00th=[ 206], 90.00th=[ 260], 95.00th=[ 297], 00:16:16.260 | 99.00th=[ 388], 99.50th=[ 408], 99.90th=[ 445], 99.95th=[ 453], 00:16:16.260 | 99.99th=[ 457] 00:16:16.260 bw ( KiB/s): min=15984, max=15984, per=30.52%, avg=15984.00, stdev= 0.00, samples=1 00:16:16.260 iops : min= 3996, max= 3996, avg=3996.00, stdev= 0.00, samples=1 00:16:16.260 lat (usec) : 100=36.50%, 250=52.32%, 500=11.17%, 750=0.02% 00:16:16.260 cpu : usr=7.00%, sys=11.80%, ctx=6587, majf=0, minf=1 00:16:16.260 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:16.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:16.260 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:16.260 issued rwts: total=3072,3515,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:16.260 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:16.260 job3: (groupid=0, jobs=1): err= 0: pid=2911730: Mon Jul 15 10:23:53 2024 00:16:16.260 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:16:16.260 slat (nsec): min=5488, max=49511, avg=12119.97, stdev=9872.45 00:16:16.260 clat (usec): min=51, max=549, avg=138.58, stdev=72.98 00:16:16.260 lat (usec): min=57, max=556, avg=150.70, stdev=78.99 00:16:16.260 clat percentiles (usec): 00:16:16.260 | 1.00th=[ 57], 5.00th=[ 64], 10.00th=[ 73], 20.00th=[ 87], 00:16:16.260 | 30.00th=[ 97], 40.00th=[ 105], 50.00th=[ 113], 60.00th=[ 119], 00:16:16.260 | 70.00th=[ 127], 80.00th=[ 221], 90.00th=[ 251], 95.00th=[ 273], 00:16:16.260 | 99.00th=[ 363], 
99.50th=[ 400], 99.90th=[ 445], 99.95th=[ 478], 00:16:16.260 | 99.99th=[ 553] 00:16:16.260 write: IOPS=3500, BW=13.7MiB/s (14.3MB/s)(13.7MiB/1001msec); 0 zone resets 00:16:16.260 slat (nsec): min=7775, max=61845, avg=14618.44, stdev=10275.17 00:16:16.260 clat (usec): min=45, max=459, avg=131.80, stdev=73.92 00:16:16.260 lat (usec): min=56, max=477, avg=146.42, stdev=81.24 00:16:16.260 clat percentiles (usec): 00:16:16.260 | 1.00th=[ 53], 5.00th=[ 58], 10.00th=[ 61], 20.00th=[ 72], 00:16:16.260 | 30.00th=[ 89], 40.00th=[ 101], 50.00th=[ 111], 60.00th=[ 117], 00:16:16.260 | 70.00th=[ 124], 80.00th=[ 212], 90.00th=[ 251], 95.00th=[ 277], 00:16:16.260 | 99.00th=[ 359], 99.50th=[ 379], 99.90th=[ 412], 99.95th=[ 445], 00:16:16.260 | 99.99th=[ 461] 00:16:16.260 bw ( KiB/s): min=16384, max=16384, per=31.28%, avg=16384.00, stdev= 0.00, samples=1 00:16:16.260 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:16:16.260 lat (usec) : 50=0.03%, 100=36.42%, 250=53.60%, 500=9.93%, 750=0.02% 00:16:16.260 cpu : usr=6.90%, sys=11.20%, ctx=6577, majf=0, minf=1 00:16:16.260 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:16.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:16.260 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:16.260 issued rwts: total=3072,3504,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:16.260 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:16.260 00:16:16.260 Run status group 0 (all jobs): 00:16:16.260 READ: bw=45.5MiB/s (47.8MB/s), 9.99MiB/s-12.0MiB/s (10.5MB/s-12.6MB/s), io=45.6MiB (47.8MB), run=1001-1001msec 00:16:16.260 WRITE: bw=51.1MiB/s (53.6MB/s), 11.8MiB/s-13.7MiB/s (12.3MB/s-14.4MB/s), io=51.2MiB (53.7MB), run=1001-1001msec 00:16:16.260 00:16:16.260 Disk stats (read/write): 00:16:16.260 nvme0n1: ios=2556/2560, merge=0/0, ticks=304/284, in_queue=588, util=86.17% 00:16:16.260 nvme0n2: ios=2202/2560, merge=0/0, ticks=240/272, in_queue=512, util=86.28% 00:16:16.260 nvme0n3: ios=2560/2819, merge=0/0, ticks=293/314, in_queue=607, util=88.71% 00:16:16.260 nvme0n4: ios=2560/2968, merge=0/0, ticks=285/285, in_queue=570, util=89.65% 00:16:16.260 10:23:53 nvmf_rdma.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:16:16.260 [global] 00:16:16.260 thread=1 00:16:16.260 invalidate=1 00:16:16.260 rw=write 00:16:16.260 time_based=1 00:16:16.260 runtime=1 00:16:16.260 ioengine=libaio 00:16:16.260 direct=1 00:16:16.260 bs=4096 00:16:16.260 iodepth=128 00:16:16.260 norandommap=0 00:16:16.260 numjobs=1 00:16:16.260 00:16:16.260 verify_dump=1 00:16:16.260 verify_backlog=512 00:16:16.260 verify_state_save=0 00:16:16.260 do_verify=1 00:16:16.260 verify=crc32c-intel 00:16:16.260 [job0] 00:16:16.260 filename=/dev/nvme0n1 00:16:16.260 [job1] 00:16:16.260 filename=/dev/nvme0n2 00:16:16.260 [job2] 00:16:16.260 filename=/dev/nvme0n3 00:16:16.260 [job3] 00:16:16.260 filename=/dev/nvme0n4 00:16:16.260 Could not set queue depth (nvme0n1) 00:16:16.260 Could not set queue depth (nvme0n2) 00:16:16.260 Could not set queue depth (nvme0n3) 00:16:16.260 Could not set queue depth (nvme0n4) 00:16:16.531 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:16.531 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:16.531 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=128 00:16:16.531 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:16.531 fio-3.35 00:16:16.531 Starting 4 threads 00:16:17.532 00:16:17.532 job0: (groupid=0, jobs=1): err= 0: pid=2912252: Mon Jul 15 10:23:54 2024 00:16:17.532 read: IOPS=12.2k, BW=47.5MiB/s (49.8MB/s)(47.6MiB/1002msec) 00:16:17.532 slat (nsec): min=1173, max=1486.1k, avg=40277.70, stdev=143957.73 00:16:17.532 clat (usec): min=1269, max=11994, avg=5303.00, stdev=1713.21 00:16:17.532 lat (usec): min=1888, max=11996, avg=5343.28, stdev=1725.33 00:16:17.532 clat percentiles (usec): 00:16:17.533 | 1.00th=[ 3294], 5.00th=[ 3687], 10.00th=[ 3916], 20.00th=[ 4293], 00:16:17.533 | 30.00th=[ 4490], 40.00th=[ 4621], 50.00th=[ 4752], 60.00th=[ 4883], 00:16:17.533 | 70.00th=[ 5014], 80.00th=[ 5342], 90.00th=[ 8586], 95.00th=[ 8979], 00:16:17.533 | 99.00th=[10814], 99.50th=[10945], 99.90th=[11076], 99.95th=[11994], 00:16:17.533 | 99.99th=[11994] 00:16:17.533 write: IOPS=12.3k, BW=47.9MiB/s (50.2MB/s)(48.0MiB/1002msec); 0 zone resets 00:16:17.533 slat (nsec): min=1654, max=2790.3k, avg=39010.83, stdev=139365.00 00:16:17.533 clat (usec): min=2762, max=11992, avg=5081.61, stdev=1794.70 00:16:17.533 lat (usec): min=2770, max=13171, avg=5120.62, stdev=1808.67 00:16:17.533 clat percentiles (usec): 00:16:17.533 | 1.00th=[ 3064], 5.00th=[ 3458], 10.00th=[ 3687], 20.00th=[ 4047], 00:16:17.533 | 30.00th=[ 4228], 40.00th=[ 4359], 50.00th=[ 4490], 60.00th=[ 4621], 00:16:17.533 | 70.00th=[ 4752], 80.00th=[ 5014], 90.00th=[ 8455], 95.00th=[ 8979], 00:16:17.533 | 99.00th=[10683], 99.50th=[10683], 99.90th=[11469], 99.95th=[11994], 00:16:17.533 | 99.99th=[11994] 00:16:17.533 bw ( KiB/s): min=49152, max=49152, per=35.69%, avg=49152.00, stdev= 0.00, samples=2 00:16:17.533 iops : min=12288, max=12288, avg=12288.00, stdev= 0.00, samples=2 00:16:17.533 lat (msec) : 2=0.07%, 4=14.69%, 10=82.11%, 20=3.13% 00:16:17.533 cpu : usr=5.49%, sys=6.89%, ctx=1916, majf=0, minf=1 00:16:17.533 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:16:17.533 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.533 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:17.533 issued rwts: total=12184,12288,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.533 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:17.533 job1: (groupid=0, jobs=1): err= 0: pid=2912253: Mon Jul 15 10:23:54 2024 00:16:17.533 read: IOPS=7156, BW=28.0MiB/s (29.3MB/s)(28.0MiB/1001msec) 00:16:17.533 slat (nsec): min=1196, max=3142.3k, avg=70147.28, stdev=310428.86 00:16:17.533 clat (usec): min=304, max=17229, avg=8946.37, stdev=3420.94 00:16:17.533 lat (usec): min=881, max=17239, avg=9016.51, stdev=3434.98 00:16:17.533 clat percentiles (usec): 00:16:17.533 | 1.00th=[ 3687], 5.00th=[ 4490], 10.00th=[ 4621], 20.00th=[ 4883], 00:16:17.533 | 30.00th=[ 5145], 40.00th=[10159], 50.00th=[10552], 60.00th=[10683], 00:16:17.533 | 70.00th=[10814], 80.00th=[11076], 90.00th=[12256], 95.00th=[13960], 00:16:17.533 | 99.00th=[16909], 99.50th=[16909], 99.90th=[17171], 99.95th=[17171], 00:16:17.533 | 99.99th=[17171] 00:16:17.533 write: IOPS=7160, BW=28.0MiB/s (29.3MB/s)(28.0MiB/1001msec); 0 zone resets 00:16:17.533 slat (nsec): min=1665, max=3041.2k, avg=67087.31, stdev=293897.56 00:16:17.533 clat (usec): min=3752, max=16494, avg=8682.01, stdev=3320.18 00:16:17.533 lat (usec): min=3755, max=16501, avg=8749.10, stdev=3332.79 00:16:17.533 clat 
percentiles (usec): 00:16:17.533 | 1.00th=[ 4113], 5.00th=[ 4293], 10.00th=[ 4490], 20.00th=[ 4686], 00:16:17.533 | 30.00th=[ 5014], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[10159], 00:16:17.533 | 70.00th=[10290], 80.00th=[10552], 90.00th=[11600], 95.00th=[15533], 00:16:17.533 | 99.00th=[15926], 99.50th=[16188], 99.90th=[16450], 99.95th=[16450], 00:16:17.533 | 99.99th=[16450] 00:16:17.533 bw ( KiB/s): min=22536, max=22536, per=16.36%, avg=22536.00, stdev= 0.00, samples=1 00:16:17.533 iops : min= 5634, max= 5634, avg=5634.00, stdev= 0.00, samples=1 00:16:17.533 lat (usec) : 500=0.01%, 1000=0.09% 00:16:17.533 lat (msec) : 2=0.13%, 4=0.65%, 10=41.42%, 20=57.70% 00:16:17.533 cpu : usr=2.70%, sys=4.20%, ctx=975, majf=0, minf=1 00:16:17.533 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:16:17.533 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.533 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:17.533 issued rwts: total=7164,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.533 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:17.533 job2: (groupid=0, jobs=1): err= 0: pid=2912254: Mon Jul 15 10:23:54 2024 00:16:17.533 read: IOPS=8949, BW=35.0MiB/s (36.7MB/s)(35.0MiB/1002msec) 00:16:17.533 slat (nsec): min=1177, max=1562.6k, avg=54088.76, stdev=189403.83 00:16:17.533 clat (usec): min=690, max=16881, avg=6997.16, stdev=2402.24 00:16:17.533 lat (usec): min=1512, max=16891, avg=7051.24, stdev=2419.71 00:16:17.533 clat percentiles (usec): 00:16:17.533 | 1.00th=[ 4621], 5.00th=[ 5211], 10.00th=[ 5407], 20.00th=[ 5604], 00:16:17.533 | 30.00th=[ 5735], 40.00th=[ 5866], 50.00th=[ 5997], 60.00th=[ 6194], 00:16:17.533 | 70.00th=[ 6980], 80.00th=[ 8094], 90.00th=[10814], 95.00th=[11731], 00:16:17.533 | 99.00th=[16712], 99.50th=[16909], 99.90th=[16909], 99.95th=[16909], 00:16:17.533 | 99.99th=[16909] 00:16:17.533 write: IOPS=9197, BW=35.9MiB/s (37.7MB/s)(36.0MiB/1002msec); 0 zone resets 00:16:17.533 slat (nsec): min=1686, max=2470.8k, avg=53242.38, stdev=186047.77 00:16:17.533 clat (usec): min=3958, max=16460, avg=6914.16, stdev=2568.09 00:16:17.533 lat (usec): min=3966, max=16462, avg=6967.40, stdev=2585.42 00:16:17.533 clat percentiles (usec): 00:16:17.533 | 1.00th=[ 4817], 5.00th=[ 5014], 10.00th=[ 5211], 20.00th=[ 5473], 00:16:17.533 | 30.00th=[ 5538], 40.00th=[ 5669], 50.00th=[ 5800], 60.00th=[ 5997], 00:16:17.533 | 70.00th=[ 7308], 80.00th=[ 7898], 90.00th=[10945], 95.00th=[14615], 00:16:17.533 | 99.00th=[15795], 99.50th=[16057], 99.90th=[16319], 99.95th=[16450], 00:16:17.533 | 99.99th=[16450] 00:16:17.533 bw ( KiB/s): min=33450, max=40344, per=26.79%, avg=36897.00, stdev=4874.79, samples=2 00:16:17.533 iops : min= 8362, max=10086, avg=9224.00, stdev=1219.05, samples=2 00:16:17.533 lat (usec) : 750=0.01% 00:16:17.533 lat (msec) : 2=0.08%, 4=0.30%, 10=88.52%, 20=11.10% 00:16:17.533 cpu : usr=3.00%, sys=4.60%, ctx=1890, majf=0, minf=1 00:16:17.533 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:16:17.533 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.533 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:17.533 issued rwts: total=8967,9216,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.533 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:17.533 job3: (groupid=0, jobs=1): err= 0: pid=2912255: Mon Jul 15 10:23:54 2024 00:16:17.533 read: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec) 
00:16:17.533 slat (nsec): min=1225, max=2511.8k, avg=87896.23, stdev=307803.54 00:16:17.533 clat (usec): min=8628, max=17101, avg=11287.58, stdev=1510.16 00:16:17.533 lat (usec): min=9071, max=17714, avg=11375.48, stdev=1493.61 00:16:17.533 clat percentiles (usec): 00:16:17.533 | 1.00th=[ 9241], 5.00th=[10028], 10.00th=[10290], 20.00th=[10552], 00:16:17.533 | 30.00th=[10552], 40.00th=[10683], 50.00th=[10814], 60.00th=[10945], 00:16:17.533 | 70.00th=[11338], 80.00th=[11731], 90.00th=[12387], 95.00th=[16188], 00:16:17.533 | 99.00th=[16712], 99.50th=[16909], 99.90th=[17171], 99.95th=[17171], 00:16:17.533 | 99.99th=[17171] 00:16:17.533 write: IOPS=5819, BW=22.7MiB/s (23.8MB/s)(22.8MiB/1002msec); 0 zone resets 00:16:17.533 slat (nsec): min=1710, max=2346.8k, avg=84179.04, stdev=296500.67 00:16:17.533 clat (usec): min=1117, max=16510, avg=10813.93, stdev=1705.45 00:16:17.533 lat (usec): min=1766, max=16512, avg=10898.11, stdev=1688.96 00:16:17.533 clat percentiles (usec): 00:16:17.533 | 1.00th=[ 6390], 5.00th=[ 9503], 10.00th=[ 9896], 20.00th=[10028], 00:16:17.533 | 30.00th=[10159], 40.00th=[10159], 50.00th=[10290], 60.00th=[10552], 00:16:17.533 | 70.00th=[10945], 80.00th=[11338], 90.00th=[12125], 95.00th=[15533], 00:16:17.533 | 99.00th=[15926], 99.50th=[16188], 99.90th=[16450], 99.95th=[16450], 00:16:17.533 | 99.99th=[16450] 00:16:17.533 bw ( KiB/s): min=22712, max=22920, per=16.56%, avg=22816.00, stdev=147.08, samples=2 00:16:17.533 iops : min= 5678, max= 5730, avg=5704.00, stdev=36.77, samples=2 00:16:17.533 lat (msec) : 2=0.02%, 4=0.27%, 10=11.61%, 20=88.10% 00:16:17.533 cpu : usr=3.20%, sys=3.10%, ctx=1371, majf=0, minf=1 00:16:17.533 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:16:17.533 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.533 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:17.533 issued rwts: total=5632,5831,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.533 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:17.533 00:16:17.533 Run status group 0 (all jobs): 00:16:17.533 READ: bw=132MiB/s (139MB/s), 22.0MiB/s-47.5MiB/s (23.0MB/s-49.8MB/s), io=133MiB (139MB), run=1001-1002msec 00:16:17.533 WRITE: bw=135MiB/s (141MB/s), 22.7MiB/s-47.9MiB/s (23.8MB/s-50.2MB/s), io=135MiB (141MB), run=1001-1002msec 00:16:17.533 00:16:17.533 Disk stats (read/write): 00:16:17.533 nvme0n1: ios=10731/10752, merge=0/0, ticks=16943/15949, in_queue=32892, util=86.07% 00:16:17.533 nvme0n2: ios=5087/5120, merge=0/0, ticks=13086/12588, in_queue=25674, util=86.18% 00:16:17.533 nvme0n3: ios=7680/7922, merge=0/0, ticks=13982/14219, in_queue=28201, util=88.61% 00:16:17.533 nvme0n4: ios=4608/5061, merge=0/0, ticks=12759/13287, in_queue=26046, util=89.55% 00:16:17.533 10:23:54 nvmf_rdma.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:16:17.800 [global] 00:16:17.800 thread=1 00:16:17.800 invalidate=1 00:16:17.800 rw=randwrite 00:16:17.800 time_based=1 00:16:17.800 runtime=1 00:16:17.800 ioengine=libaio 00:16:17.800 direct=1 00:16:17.800 bs=4096 00:16:17.800 iodepth=128 00:16:17.800 norandommap=0 00:16:17.800 numjobs=1 00:16:17.800 00:16:17.800 verify_dump=1 00:16:17.800 verify_backlog=512 00:16:17.800 verify_state_save=0 00:16:17.800 do_verify=1 00:16:17.800 verify=crc32c-intel 00:16:17.800 [job0] 00:16:17.800 filename=/dev/nvme0n1 00:16:17.800 [job1] 00:16:17.800 filename=/dev/nvme0n2 00:16:17.800 [job2] 
00:16:17.800 filename=/dev/nvme0n3 00:16:17.800 [job3] 00:16:17.800 filename=/dev/nvme0n4 00:16:17.800 Could not set queue depth (nvme0n1) 00:16:17.800 Could not set queue depth (nvme0n2) 00:16:17.800 Could not set queue depth (nvme0n3) 00:16:17.800 Could not set queue depth (nvme0n4) 00:16:18.061 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:18.061 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:18.061 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:18.061 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:18.061 fio-3.35 00:16:18.061 Starting 4 threads 00:16:19.484 00:16:19.484 job0: (groupid=0, jobs=1): err= 0: pid=2912781: Mon Jul 15 10:23:56 2024 00:16:19.484 read: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec) 00:16:19.484 slat (nsec): min=1157, max=2750.3k, avg=85185.90, stdev=381619.98 00:16:19.484 clat (usec): min=5198, max=14454, avg=11040.46, stdev=2480.84 00:16:19.484 lat (usec): min=5201, max=14542, avg=11125.65, stdev=2473.28 00:16:19.484 clat percentiles (usec): 00:16:19.484 | 1.00th=[ 5473], 5.00th=[ 5735], 10.00th=[ 5932], 20.00th=[10028], 00:16:19.484 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12125], 60.00th=[12256], 00:16:19.484 | 70.00th=[12256], 80.00th=[12387], 90.00th=[12518], 95.00th=[12649], 00:16:19.484 | 99.00th=[13960], 99.50th=[14091], 99.90th=[14353], 99.95th=[14484], 00:16:19.484 | 99.99th=[14484] 00:16:19.484 write: IOPS=5899, BW=23.0MiB/s (24.2MB/s)(23.1MiB/1003msec); 0 zone resets 00:16:19.484 slat (nsec): min=1628, max=5269.8k, avg=84915.54, stdev=385092.60 00:16:19.484 clat (usec): min=1118, max=15509, avg=10996.41, stdev=2835.55 00:16:19.484 lat (usec): min=1127, max=15512, avg=11081.33, stdev=2833.47 00:16:19.484 clat percentiles (usec): 00:16:19.484 | 1.00th=[ 3294], 5.00th=[ 5276], 10.00th=[ 5735], 20.00th=[ 9503], 00:16:19.484 | 30.00th=[12125], 40.00th=[12256], 50.00th=[12387], 60.00th=[12387], 00:16:19.484 | 70.00th=[12518], 80.00th=[12649], 90.00th=[12780], 95.00th=[12780], 00:16:19.484 | 99.00th=[13173], 99.50th=[14615], 99.90th=[15139], 99.95th=[15139], 00:16:19.484 | 99.99th=[15533] 00:16:19.484 bw ( KiB/s): min=20720, max=25548, per=19.74%, avg=23134.00, stdev=3413.91, samples=2 00:16:19.484 iops : min= 5180, max= 6387, avg=5783.50, stdev=853.48, samples=2 00:16:19.485 lat (msec) : 2=0.16%, 4=0.68%, 10=19.75%, 20=79.40% 00:16:19.485 cpu : usr=2.40%, sys=4.29%, ctx=2339, majf=0, minf=1 00:16:19.485 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:16:19.485 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:19.485 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:19.485 issued rwts: total=5632,5917,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:19.485 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:19.485 job1: (groupid=0, jobs=1): err= 0: pid=2912782: Mon Jul 15 10:23:56 2024 00:16:19.485 read: IOPS=4754, BW=18.6MiB/s (19.5MB/s)(18.6MiB/1003msec) 00:16:19.485 slat (nsec): min=1195, max=2974.0k, avg=97665.56, stdev=331382.11 00:16:19.485 clat (usec): min=2069, max=20335, avg=12371.45, stdev=1123.98 00:16:19.485 lat (usec): min=2453, max=21509, avg=12469.11, stdev=1085.11 00:16:19.485 clat percentiles (usec): 00:16:19.485 | 1.00th=[ 7832], 5.00th=[11469], 10.00th=[11994], 
20.00th=[12125], 00:16:19.485 | 30.00th=[12256], 40.00th=[12387], 50.00th=[12387], 60.00th=[12518], 00:16:19.485 | 70.00th=[12518], 80.00th=[12649], 90.00th=[12780], 95.00th=[12780], 00:16:19.485 | 99.00th=[16581], 99.50th=[19006], 99.90th=[20317], 99.95th=[20317], 00:16:19.485 | 99.99th=[20317] 00:16:19.485 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:16:19.485 slat (nsec): min=1626, max=3575.6k, avg=101524.27, stdev=350450.87 00:16:19.485 clat (usec): min=9770, max=24114, avg=13201.15, stdev=3115.37 00:16:19.485 lat (usec): min=11469, max=24123, avg=13302.67, stdev=3117.84 00:16:19.485 clat percentiles (usec): 00:16:19.485 | 1.00th=[10421], 5.00th=[11600], 10.00th=[11863], 20.00th=[11994], 00:16:19.485 | 30.00th=[12125], 40.00th=[12125], 50.00th=[12256], 60.00th=[12387], 00:16:19.485 | 70.00th=[12387], 80.00th=[12518], 90.00th=[14091], 95.00th=[22676], 00:16:19.485 | 99.00th=[23462], 99.50th=[23462], 99.90th=[23725], 99.95th=[23725], 00:16:19.485 | 99.99th=[23987] 00:16:19.485 bw ( KiB/s): min=20439, max=20480, per=17.46%, avg=20459.50, stdev=28.99, samples=2 00:16:19.485 iops : min= 5109, max= 5120, avg=5114.50, stdev= 7.78, samples=2 00:16:19.485 lat (msec) : 4=0.02%, 10=0.74%, 20=94.40%, 50=4.84% 00:16:19.485 cpu : usr=2.10%, sys=4.69%, ctx=2593, majf=0, minf=2 00:16:19.485 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:16:19.485 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:19.485 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:19.485 issued rwts: total=4769,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:19.485 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:19.485 job2: (groupid=0, jobs=1): err= 0: pid=2912783: Mon Jul 15 10:23:56 2024 00:16:19.485 read: IOPS=12.8k, BW=49.9MiB/s (52.3MB/s)(50.0MiB/1002msec) 00:16:19.485 slat (nsec): min=1185, max=1220.9k, avg=37957.12, stdev=141638.19 00:16:19.485 clat (usec): min=3642, max=6984, avg=4963.27, stdev=529.92 00:16:19.485 lat (usec): min=3660, max=6992, avg=5001.22, stdev=533.34 00:16:19.485 clat percentiles (usec): 00:16:19.485 | 1.00th=[ 3982], 5.00th=[ 4228], 10.00th=[ 4359], 20.00th=[ 4555], 00:16:19.485 | 30.00th=[ 4686], 40.00th=[ 4752], 50.00th=[ 4883], 60.00th=[ 4948], 00:16:19.485 | 70.00th=[ 5145], 80.00th=[ 5342], 90.00th=[ 5735], 95.00th=[ 5997], 00:16:19.485 | 99.00th=[ 6390], 99.50th=[ 6521], 99.90th=[ 6783], 99.95th=[ 6915], 00:16:19.485 | 99.99th=[ 6915] 00:16:19.485 write: IOPS=13.2k, BW=51.6MiB/s (54.1MB/s)(51.7MiB/1002msec); 0 zone resets 00:16:19.485 slat (nsec): min=1658, max=2549.2k, avg=36336.31, stdev=137200.06 00:16:19.485 clat (usec): min=1124, max=7441, avg=4784.43, stdev=609.57 00:16:19.485 lat (usec): min=1126, max=7950, avg=4820.77, stdev=612.79 00:16:19.485 clat percentiles (usec): 00:16:19.485 | 1.00th=[ 3785], 5.00th=[ 4080], 10.00th=[ 4228], 20.00th=[ 4359], 00:16:19.485 | 30.00th=[ 4490], 40.00th=[ 4555], 50.00th=[ 4621], 60.00th=[ 4752], 00:16:19.485 | 70.00th=[ 4883], 80.00th=[ 5211], 90.00th=[ 5669], 95.00th=[ 5932], 00:16:19.485 | 99.00th=[ 6521], 99.50th=[ 6849], 99.90th=[ 7177], 99.95th=[ 7439], 00:16:19.485 | 99.99th=[ 7439] 00:16:19.485 bw ( KiB/s): min=49916, max=49916, per=42.59%, avg=49916.00, stdev= 0.00, samples=1 00:16:19.485 iops : min=12479, max=12479, avg=12479.00, stdev= 0.00, samples=1 00:16:19.485 lat (msec) : 2=0.09%, 4=2.28%, 10=97.63% 00:16:19.485 cpu : usr=5.49%, sys=7.19%, ctx=1741, majf=0, minf=1 00:16:19.485 IO depths : 1=0.1%, 2=0.1%, 
4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:19.485 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:19.485 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:19.485 issued rwts: total=12800,13232,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:19.485 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:19.485 job3: (groupid=0, jobs=1): err= 0: pid=2912784: Mon Jul 15 10:23:56 2024 00:16:19.485 read: IOPS=4886, BW=19.1MiB/s (20.0MB/s)(19.1MiB/1003msec) 00:16:19.485 slat (nsec): min=1224, max=2338.6k, avg=99539.15, stdev=293855.66 00:16:19.485 clat (usec): min=2068, max=25183, avg=12625.45, stdev=2056.26 00:16:19.485 lat (usec): min=2559, max=25190, avg=12724.99, stdev=2052.62 00:16:19.485 clat percentiles (usec): 00:16:19.485 | 1.00th=[ 7111], 5.00th=[11600], 10.00th=[11994], 20.00th=[12125], 00:16:19.485 | 30.00th=[12256], 40.00th=[12387], 50.00th=[12387], 60.00th=[12518], 00:16:19.485 | 70.00th=[12518], 80.00th=[12649], 90.00th=[12780], 95.00th=[13042], 00:16:19.485 | 99.00th=[23200], 99.50th=[23725], 99.90th=[25035], 99.95th=[25035], 00:16:19.485 | 99.99th=[25297] 00:16:19.485 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:16:19.485 slat (nsec): min=1677, max=4856.7k, avg=96954.07, stdev=305743.86 00:16:19.485 clat (usec): min=4348, max=23694, avg=12728.02, stdev=2827.68 00:16:19.485 lat (usec): min=4365, max=23701, avg=12824.97, stdev=2831.70 00:16:19.485 clat percentiles (usec): 00:16:19.485 | 1.00th=[ 5276], 5.00th=[11207], 10.00th=[11731], 20.00th=[11994], 00:16:19.485 | 30.00th=[12125], 40.00th=[12125], 50.00th=[12256], 60.00th=[12256], 00:16:19.485 | 70.00th=[12387], 80.00th=[12518], 90.00th=[12911], 95.00th=[22152], 00:16:19.485 | 99.00th=[23462], 99.50th=[23462], 99.90th=[23725], 99.95th=[23725], 00:16:19.485 | 99.99th=[23725] 00:16:19.485 bw ( KiB/s): min=20480, max=20480, per=17.47%, avg=20480.00, stdev= 0.00, samples=2 00:16:19.485 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:16:19.485 lat (msec) : 4=0.17%, 10=1.92%, 20=93.45%, 50=4.46% 00:16:19.485 cpu : usr=2.20%, sys=4.79%, ctx=2360, majf=0, minf=1 00:16:19.485 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:16:19.485 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:19.485 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:19.485 issued rwts: total=4901,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:19.485 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:19.485 00:16:19.485 Run status group 0 (all jobs): 00:16:19.485 READ: bw=109MiB/s (115MB/s), 18.6MiB/s-49.9MiB/s (19.5MB/s-52.3MB/s), io=110MiB (115MB), run=1002-1003msec 00:16:19.485 WRITE: bw=114MiB/s (120MB/s), 19.9MiB/s-51.6MiB/s (20.9MB/s-54.1MB/s), io=115MiB (120MB), run=1002-1003msec 00:16:19.485 00:16:19.485 Disk stats (read/write): 00:16:19.485 nvme0n1: ios=4849/5120, merge=0/0, ticks=17202/18125, in_queue=35327, util=85.87% 00:16:19.485 nvme0n2: ios=4096/4191, merge=0/0, ticks=12527/13804, in_queue=26331, util=86.08% 00:16:19.485 nvme0n3: ios=10752/11067, merge=0/0, ticks=15854/15460, in_queue=31314, util=88.60% 00:16:19.485 nvme0n4: ios=4096/4307, merge=0/0, ticks=12866/13473, in_queue=26339, util=89.53% 00:16:19.485 10:23:56 nvmf_rdma.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:16:19.485 10:23:56 nvmf_rdma.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2913024 00:16:19.485 10:23:56 nvmf_rdma.nvmf_fio_target -- target/fio.sh@61 -- # 
sleep 3 00:16:19.485 10:23:56 nvmf_rdma.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:16:19.485 [global] 00:16:19.485 thread=1 00:16:19.485 invalidate=1 00:16:19.485 rw=read 00:16:19.485 time_based=1 00:16:19.485 runtime=10 00:16:19.485 ioengine=libaio 00:16:19.485 direct=1 00:16:19.485 bs=4096 00:16:19.485 iodepth=1 00:16:19.485 norandommap=1 00:16:19.485 numjobs=1 00:16:19.485 00:16:19.485 [job0] 00:16:19.485 filename=/dev/nvme0n1 00:16:19.485 [job1] 00:16:19.485 filename=/dev/nvme0n2 00:16:19.485 [job2] 00:16:19.485 filename=/dev/nvme0n3 00:16:19.485 [job3] 00:16:19.485 filename=/dev/nvme0n4 00:16:19.485 Could not set queue depth (nvme0n1) 00:16:19.485 Could not set queue depth (nvme0n2) 00:16:19.485 Could not set queue depth (nvme0n3) 00:16:19.485 Could not set queue depth (nvme0n4) 00:16:19.749 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:19.749 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:19.749 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:19.749 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:19.749 fio-3.35 00:16:19.749 Starting 4 threads 00:16:22.294 10:23:59 nvmf_rdma.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:16:22.555 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=63475712, buflen=4096 00:16:22.555 fio: pid=2913305, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:22.555 10:23:59 nvmf_rdma.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:16:22.555 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=75112448, buflen=4096 00:16:22.555 fio: pid=2913304, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:22.555 10:23:59 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:22.555 10:23:59 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:16:22.816 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=13807616, buflen=4096 00:16:22.816 fio: pid=2913302, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:22.816 10:23:59 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:22.816 10:23:59 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:16:23.077 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=18300928, buflen=4096 00:16:23.077 fio: pid=2913303, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:23.077 10:24:00 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:23.077 10:24:00 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:16:23.077 00:16:23.077 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, 
error=Remote I/O error): pid=2913302: Mon Jul 15 10:24:00 2024 00:16:23.077 read: IOPS=12.4k, BW=48.4MiB/s (50.8MB/s)(141MiB/2914msec) 00:16:23.077 slat (usec): min=5, max=16545, avg= 9.08, stdev=169.16 00:16:23.077 clat (usec): min=30, max=487, avg=70.24, stdev=45.88 00:16:23.077 lat (usec): min=50, max=16734, avg=79.33, stdev=177.49 00:16:23.077 clat percentiles (usec): 00:16:23.077 | 1.00th=[ 48], 5.00th=[ 50], 10.00th=[ 51], 20.00th=[ 53], 00:16:23.077 | 30.00th=[ 55], 40.00th=[ 56], 50.00th=[ 57], 60.00th=[ 59], 00:16:23.077 | 70.00th=[ 62], 80.00th=[ 69], 90.00th=[ 83], 95.00th=[ 194], 00:16:23.077 | 99.00th=[ 277], 99.50th=[ 314], 99.90th=[ 371], 99.95th=[ 388], 00:16:23.077 | 99.99th=[ 453] 00:16:23.077 bw ( KiB/s): min=28400, max=62888, per=43.92%, avg=51555.20, stdev=13625.91, samples=5 00:16:23.077 iops : min= 7100, max=15722, avg=12888.80, stdev=3406.48, samples=5 00:16:23.077 lat (usec) : 50=5.94%, 100=86.93%, 250=4.87%, 500=2.26% 00:16:23.077 cpu : usr=4.84%, sys=14.97%, ctx=36144, majf=0, minf=1 00:16:23.077 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:23.077 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:23.077 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:23.077 issued rwts: total=36140,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:23.077 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:23.077 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2913303: Mon Jul 15 10:24:00 2024 00:16:23.077 read: IOPS=6737, BW=26.3MiB/s (27.6MB/s)(81.5MiB/3095msec) 00:16:23.077 slat (usec): min=3, max=16804, avg=16.83, stdev=273.45 00:16:23.077 clat (usec): min=34, max=482, avg=129.16, stdev=85.79 00:16:23.077 lat (usec): min=47, max=16956, avg=145.99, stdev=289.97 00:16:23.077 clat percentiles (usec): 00:16:23.077 | 1.00th=[ 46], 5.00th=[ 49], 10.00th=[ 52], 20.00th=[ 63], 00:16:23.077 | 30.00th=[ 71], 40.00th=[ 78], 50.00th=[ 88], 60.00th=[ 106], 00:16:23.077 | 70.00th=[ 186], 80.00th=[ 225], 90.00th=[ 255], 95.00th=[ 297], 00:16:23.077 | 99.00th=[ 371], 99.50th=[ 392], 99.90th=[ 433], 99.95th=[ 457], 00:16:23.077 | 99.99th=[ 474] 00:16:23.077 bw ( KiB/s): min=19456, max=31616, per=19.70%, avg=23121.60, stdev=4991.07, samples=5 00:16:23.077 iops : min= 4864, max= 7904, avg=5780.40, stdev=1247.77, samples=5 00:16:23.077 lat (usec) : 50=6.58%, 100=49.77%, 250=32.79%, 500=10.85% 00:16:23.077 cpu : usr=4.36%, sys=12.93%, ctx=20863, majf=0, minf=1 00:16:23.077 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:23.077 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:23.077 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:23.077 issued rwts: total=20853,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:23.077 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:23.077 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2913304: Mon Jul 15 10:24:00 2024 00:16:23.077 read: IOPS=6678, BW=26.1MiB/s (27.4MB/s)(71.6MiB/2746msec) 00:16:23.078 slat (usec): min=5, max=15815, avg=12.62, stdev=165.26 00:16:23.078 clat (usec): min=49, max=482, avg=135.05, stdev=77.26 00:16:23.078 lat (usec): min=56, max=16102, avg=147.67, stdev=185.23 00:16:23.078 clat percentiles (usec): 00:16:23.078 | 1.00th=[ 57], 5.00th=[ 61], 10.00th=[ 64], 20.00th=[ 73], 00:16:23.078 | 30.00th=[ 79], 40.00th=[ 87], 50.00th=[ 100], 60.00th=[ 
113], 00:16:23.078 | 70.00th=[ 194], 80.00th=[ 223], 90.00th=[ 247], 95.00th=[ 269], 00:16:23.078 | 99.00th=[ 351], 99.50th=[ 375], 99.90th=[ 408], 99.95th=[ 429], 00:16:23.078 | 99.99th=[ 453] 00:16:23.078 bw ( KiB/s): min=20680, max=37136, per=22.78%, avg=26742.40, stdev=6201.47, samples=5 00:16:23.078 iops : min= 5170, max= 9284, avg=6685.60, stdev=1550.37, samples=5 00:16:23.078 lat (usec) : 50=0.01%, 100=50.43%, 250=40.36%, 500=9.19% 00:16:23.078 cpu : usr=3.53%, sys=11.07%, ctx=18342, majf=0, minf=1 00:16:23.078 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:23.078 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:23.078 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:23.078 issued rwts: total=18339,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:23.078 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:23.078 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2913305: Mon Jul 15 10:24:00 2024 00:16:23.078 read: IOPS=6013, BW=23.5MiB/s (24.6MB/s)(60.5MiB/2577msec) 00:16:23.078 slat (nsec): min=5292, max=75011, avg=14429.46, stdev=11000.25 00:16:23.078 clat (usec): min=46, max=646, avg=149.00, stdev=84.93 00:16:23.078 lat (usec): min=56, max=652, avg=163.43, stdev=91.84 00:16:23.078 clat percentiles (usec): 00:16:23.078 | 1.00th=[ 57], 5.00th=[ 62], 10.00th=[ 69], 20.00th=[ 77], 00:16:23.078 | 30.00th=[ 84], 40.00th=[ 96], 50.00th=[ 108], 60.00th=[ 133], 00:16:23.078 | 70.00th=[ 204], 80.00th=[ 233], 90.00th=[ 265], 95.00th=[ 322], 00:16:23.078 | 99.00th=[ 371], 99.50th=[ 388], 99.90th=[ 424], 99.95th=[ 445], 00:16:23.078 | 99.99th=[ 478] 00:16:23.078 bw ( KiB/s): min=18152, max=32160, per=20.40%, avg=23950.40, stdev=6260.50, samples=5 00:16:23.078 iops : min= 4538, max= 8040, avg=5987.60, stdev=1565.13, samples=5 00:16:23.078 lat (usec) : 50=0.01%, 100=43.11%, 250=43.52%, 500=13.36%, 750=0.01% 00:16:23.078 cpu : usr=4.70%, sys=12.97%, ctx=15500, majf=0, minf=2 00:16:23.078 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:23.078 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:23.078 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:23.078 issued rwts: total=15498,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:23.078 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:23.078 00:16:23.078 Run status group 0 (all jobs): 00:16:23.078 READ: bw=115MiB/s (120MB/s), 23.5MiB/s-48.4MiB/s (24.6MB/s-50.8MB/s), io=355MiB (372MB), run=2577-3095msec 00:16:23.078 00:16:23.078 Disk stats (read/write): 00:16:23.078 nvme0n1: ios=35210/0, merge=0/0, ticks=2013/0, in_queue=2013, util=93.02% 00:16:23.078 nvme0n2: ios=17497/0, merge=0/0, ticks=1806/0, in_queue=1806, util=93.46% 00:16:23.078 nvme0n3: ios=17423/0, merge=0/0, ticks=1910/0, in_queue=1910, util=96.08% 00:16:23.078 nvme0n4: ios=14264/0, merge=0/0, ticks=1446/0, in_queue=1446, util=96.03% 00:16:23.078 10:24:00 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:23.078 10:24:00 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:16:23.339 10:24:00 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:23.339 10:24:00 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:16:23.609 10:24:00 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:23.609 10:24:00 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:16:23.609 10:24:00 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:23.609 10:24:00 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:16:23.871 10:24:00 nvmf_rdma.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:16:23.871 10:24:00 nvmf_rdma.nvmf_fio_target -- target/fio.sh@70 -- # wait 2913024 00:16:23.871 10:24:00 nvmf_rdma.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:16:23.871 10:24:00 nvmf_rdma.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:25.253 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:25.253 10:24:02 nvmf_rdma.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:25.253 10:24:02 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:16:25.253 10:24:02 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:25.253 10:24:02 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:25.253 10:24:02 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:25.253 10:24:02 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:25.253 10:24:02 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:16:25.253 10:24:02 nvmf_rdma.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:16:25.253 10:24:02 nvmf_rdma.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:16:25.253 nvmf hotplug test: fio failed as expected 00:16:25.254 10:24:02 nvmf_rdma.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:25.254 10:24:02 nvmf_rdma.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:16:25.254 10:24:02 nvmf_rdma.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:16:25.254 10:24:02 nvmf_rdma.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:16:25.254 10:24:02 nvmf_rdma.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:16:25.254 10:24:02 nvmf_rdma.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:16:25.254 10:24:02 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:25.254 10:24:02 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:16:25.254 10:24:02 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:16:25.254 10:24:02 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:16:25.254 10:24:02 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:16:25.254 10:24:02 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:25.254 10:24:02 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:16:25.254 rmmod nvme_rdma 00:16:25.254 
rmmod nvme_fabrics 00:16:25.254 10:24:02 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:25.254 10:24:02 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:16:25.254 10:24:02 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:16:25.254 10:24:02 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 2909432 ']' 00:16:25.254 10:24:02 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 2909432 00:16:25.254 10:24:02 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 2909432 ']' 00:16:25.254 10:24:02 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 2909432 00:16:25.254 10:24:02 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:16:25.254 10:24:02 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:25.254 10:24:02 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2909432 00:16:25.254 10:24:02 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:25.254 10:24:02 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:25.254 10:24:02 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2909432' 00:16:25.254 killing process with pid 2909432 00:16:25.254 10:24:02 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 2909432 00:16:25.254 10:24:02 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 2909432 00:16:25.514 10:24:02 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:25.514 10:24:02 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:16:25.514 00:16:25.514 real 0m27.935s 00:16:25.514 user 2m38.004s 00:16:25.514 sys 0m10.439s 00:16:25.514 10:24:02 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:25.514 10:24:02 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.514 ************************************ 00:16:25.514 END TEST nvmf_fio_target 00:16:25.514 ************************************ 00:16:25.514 10:24:02 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:16:25.514 10:24:02 nvmf_rdma -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:16:25.514 10:24:02 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:25.514 10:24:02 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:25.514 10:24:02 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:16:25.515 ************************************ 00:16:25.515 START TEST nvmf_bdevio 00:16:25.515 ************************************ 00:16:25.515 10:24:02 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:16:25.776 * Looking for test storage... 
00:16:25.776 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:25.776 10:24:02 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:25.776 10:24:02 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:16:25.776 10:24:02 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:25.776 10:24:02 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:25.776 10:24:02 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:25.776 10:24:02 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:25.776 10:24:02 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:25.776 10:24:02 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:25.776 10:24:02 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:25.776 10:24:02 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:25.776 10:24:02 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:25.776 10:24:02 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:25.776 10:24:02 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:25.776 10:24:02 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:25.776 10:24:02 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:25.776 10:24:02 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:25.776 10:24:02 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:25.776 10:24:02 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:25.776 10:24:02 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:25.776 10:24:02 nvmf_rdma.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:25.776 10:24:02 nvmf_rdma.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:25.776 10:24:02 nvmf_rdma.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:25.776 10:24:02 nvmf_rdma.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.776 10:24:02 nvmf_rdma.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.777 10:24:02 nvmf_rdma.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.777 10:24:02 nvmf_rdma.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:16:25.777 10:24:02 nvmf_rdma.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.777 10:24:02 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:16:25.777 10:24:02 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:25.777 10:24:02 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:25.777 10:24:02 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:25.777 10:24:02 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:25.777 10:24:02 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:25.777 10:24:02 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:25.777 10:24:02 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:25.777 10:24:02 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:25.777 10:24:02 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:25.777 10:24:02 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:25.777 10:24:02 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:16:25.777 10:24:02 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:16:25.777 10:24:02 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:25.777 10:24:02 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:25.777 10:24:02 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:25.777 10:24:02 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:25.777 10:24:02 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:25.777 10:24:02 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@22 
-- # eval '_remove_spdk_ns 14> /dev/null' 00:16:25.777 10:24:02 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:25.777 10:24:02 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:25.777 10:24:02 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:25.777 10:24:02 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:16:25.777 10:24:02 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:33.917 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:33.917 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:16:33.917 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:33.917 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:33.917 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:33.917 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:33.917 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:33.917 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:16:33.917 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:33.917 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:16:33.917 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:16:33.917 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:16:33.917 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:16:33.917 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:16:33.917 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:16:33.917 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:33.917 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:33.917 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:33.917 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:33.917 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:33.917 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:33.917 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:33.917 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:33.917 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:33.917 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:33.917 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:33.917 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:33.917 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:16:33.917 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:16:33.917 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:16:33.917 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 
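The vendor/device table assembled above (Mellanox 0x15b3 parts such as 0x1015, alongside the Intel E810/X722 IDs) is what the device scan below matches against, using the same /sys/bus/pci/devices/<pci>/net/ glob that appears in the trace. Outside the harness, roughly the same lookup can be approximated with lspci and sysfs; a minimal sketch, not the harness's own code:

  # sketch only, not part of the test harness: list Mellanox NICs (vendor 0x15b3)
  # by PCI slot and show their netdev names via sysfs
  for pci in $(lspci -D -d 15b3: | awk '{print $1}'); do
      echo "$pci -> $(ls /sys/bus/pci/devices/$pci/net/ 2>/dev/null)"
  done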
00:16:33.917 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:16:33.917 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:33.917 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:33.917 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:16:33.917 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:16:33.917 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:16:33.918 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:16:33.918 Found net devices under 0000:98:00.0: mlx_0_0 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:16:33.918 Found net devices under 0000:98:00.1: mlx_0_1 00:16:33.918 
10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@420 -- # rdma_device_init 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@58 -- # uname 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@62 -- # modprobe ib_cm 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@63 -- # modprobe ib_core 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@64 -- # modprobe ib_umad 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@66 -- # modprobe iw_cm 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@502 -- # allocate_nic_ips 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@73 -- # get_rdma_if_list 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@73 -- # for nic_name in 
$(get_rdma_if_list) 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:16:33.918 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:33.918 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:16:33.918 altname enp152s0f0np0 00:16:33.918 altname ens817f0np0 00:16:33.918 inet 192.168.100.8/24 scope global mlx_0_0 00:16:33.918 valid_lft forever preferred_lft forever 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:16:33.918 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:33.918 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:16:33.918 altname enp152s0f1np1 00:16:33.918 altname ens817f1np1 00:16:33.918 inet 192.168.100.9/24 scope global mlx_0_1 00:16:33.918 valid_lft forever preferred_lft forever 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@86 -- # get_rdma_if_list 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 
-- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:16:33.918 192.168.100.9' 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:16:33.918 192.168.100.9' 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@457 -- # head -n 1 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:16:33.918 192.168.100.9' 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@458 -- # tail -n +2 00:16:33.918 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@458 -- # head -n 1 00:16:33.919 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:33.919 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:16:33.919 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:33.919 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:16:33.919 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:16:33.919 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:16:33.919 10:24:10 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:33.919 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:33.919 10:24:10 
nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:33.919 10:24:10 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:33.919 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=2919227 00:16:33.919 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 2919227 00:16:33.919 10:24:10 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:16:33.919 10:24:10 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 2919227 ']' 00:16:33.919 10:24:10 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:33.919 10:24:10 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:33.919 10:24:10 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:33.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:33.919 10:24:10 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:33.919 10:24:10 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:33.919 [2024-07-15 10:24:10.901420] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:16:33.919 [2024-07-15 10:24:10.901490] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:33.919 EAL: No free 2048 kB hugepages reported on node 1 00:16:33.919 [2024-07-15 10:24:10.984133] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:33.919 [2024-07-15 10:24:11.075987] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:33.919 [2024-07-15 10:24:11.076049] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:33.919 [2024-07-15 10:24:11.076058] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:33.919 [2024-07-15 10:24:11.076065] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:33.919 [2024-07-15 10:24:11.076071] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
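The target above is started with -i 0 -e 0xFFFF, so all tracepoint groups are enabled; as its own notices suggest, a snapshot can be pulled from the running instance or the shared-memory trace file kept for offline analysis. A short sketch of both options (binary path assumes the SPDK build tree used by this job, output locations are arbitrary):

  # sketch only: capture a trace snapshot from shm id 0, or save the raw trace file
  ./build/bin/spdk_trace -s nvmf -i 0 > /tmp/nvmf_trace.txt
  cp /dev/shm/nvmf_trace.0 /tmp/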
00:16:33.919 [2024-07-15 10:24:11.076251] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:16:33.919 [2024-07-15 10:24:11.076384] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:16:33.919 [2024-07-15 10:24:11.076696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:16:33.919 [2024-07-15 10:24:11.076698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:34.857 10:24:11 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:34.857 10:24:11 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:16:34.857 10:24:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:34.857 10:24:11 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:34.857 10:24:11 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:34.857 10:24:11 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:34.857 10:24:11 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:34.857 10:24:11 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.857 10:24:11 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:34.857 [2024-07-15 10:24:11.778173] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x17a3b40/0x17a8030) succeed. 00:16:34.857 [2024-07-15 10:24:11.793573] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x17a5180/0x17e96c0) succeed. 00:16:34.857 10:24:11 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.857 10:24:11 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:34.857 10:24:11 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.857 10:24:11 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:34.857 Malloc0 00:16:34.857 10:24:11 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.857 10:24:11 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:34.857 10:24:11 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.857 10:24:11 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:34.857 10:24:11 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.858 10:24:11 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:34.858 10:24:11 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.858 10:24:11 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:34.858 10:24:11 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.858 10:24:11 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:34.858 10:24:11 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.858 10:24:11 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:34.858 [2024-07-15 10:24:12.005539] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:34.858 10:24:12 nvmf_rdma.nvmf_bdevio -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.858 10:24:12 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:16:34.858 10:24:12 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:34.858 10:24:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:16:34.858 10:24:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:16:34.858 10:24:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:34.858 10:24:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:34.858 { 00:16:34.858 "params": { 00:16:34.858 "name": "Nvme$subsystem", 00:16:34.858 "trtype": "$TEST_TRANSPORT", 00:16:34.858 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:34.858 "adrfam": "ipv4", 00:16:34.858 "trsvcid": "$NVMF_PORT", 00:16:34.858 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:34.858 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:34.858 "hdgst": ${hdgst:-false}, 00:16:34.858 "ddgst": ${ddgst:-false} 00:16:34.858 }, 00:16:34.858 "method": "bdev_nvme_attach_controller" 00:16:34.858 } 00:16:34.858 EOF 00:16:34.858 )") 00:16:34.858 10:24:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:16:34.858 10:24:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:16:34.858 10:24:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:16:34.858 10:24:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:34.858 "params": { 00:16:34.858 "name": "Nvme1", 00:16:34.858 "trtype": "rdma", 00:16:34.858 "traddr": "192.168.100.8", 00:16:34.858 "adrfam": "ipv4", 00:16:34.858 "trsvcid": "4420", 00:16:34.858 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:34.858 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:34.858 "hdgst": false, 00:16:34.858 "ddgst": false 00:16:34.858 }, 00:16:34.858 "method": "bdev_nvme_attach_controller" 00:16:34.858 }' 00:16:35.117 [2024-07-15 10:24:12.060430] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
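For readability: the rpc_cmd calls above configure the already-running nvmf_tgt over /var/tmp/spdk.sock, and the generated JSON passed to bdevio attaches it to the resulting subsystem at 192.168.100.8:4420. Driven by hand with scripts/rpc.py (path relative to an SPDK checkout), the same setup would look roughly like this; a sketch mirroring the traced arguments, not the harness code itself:

  # sketch only: equivalent manual target setup over the default RPC socket
  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420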
00:16:35.117 [2024-07-15 10:24:12.060501] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2919429 ] 00:16:35.117 EAL: No free 2048 kB hugepages reported on node 1 00:16:35.117 [2024-07-15 10:24:12.135501] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:35.117 [2024-07-15 10:24:12.212335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:35.117 [2024-07-15 10:24:12.212620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:35.117 [2024-07-15 10:24:12.212625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.377 I/O targets: 00:16:35.377 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:35.377 00:16:35.377 00:16:35.377 CUnit - A unit testing framework for C - Version 2.1-3 00:16:35.377 http://cunit.sourceforge.net/ 00:16:35.377 00:16:35.377 00:16:35.377 Suite: bdevio tests on: Nvme1n1 00:16:35.377 Test: blockdev write read block ...passed 00:16:35.377 Test: blockdev write zeroes read block ...passed 00:16:35.377 Test: blockdev write zeroes read no split ...passed 00:16:35.377 Test: blockdev write zeroes read split ...passed 00:16:35.377 Test: blockdev write zeroes read split partial ...passed 00:16:35.377 Test: blockdev reset ...[2024-07-15 10:24:12.433667] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:35.377 [2024-07-15 10:24:12.463282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:16:35.377 [2024-07-15 10:24:12.503641] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:35.377 passed 00:16:35.377 Test: blockdev write read 8 blocks ...passed 00:16:35.377 Test: blockdev write read size > 128k ...passed 00:16:35.377 Test: blockdev write read invalid size ...passed 00:16:35.377 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:35.377 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:35.377 Test: blockdev write read max offset ...passed 00:16:35.377 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:35.377 Test: blockdev writev readv 8 blocks ...passed 00:16:35.377 Test: blockdev writev readv 30 x 1block ...passed 00:16:35.377 Test: blockdev writev readv block ...passed 00:16:35.377 Test: blockdev writev readv size > 128k ...passed 00:16:35.377 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:35.377 Test: blockdev comparev and writev ...[2024-07-15 10:24:12.509659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:35.377 [2024-07-15 10:24:12.509685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:35.377 [2024-07-15 10:24:12.509693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:35.377 [2024-07-15 10:24:12.509698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:35.377 [2024-07-15 10:24:12.509881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:35.377 [2024-07-15 10:24:12.509888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:35.377 [2024-07-15 10:24:12.509894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:35.377 [2024-07-15 10:24:12.509899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:35.377 [2024-07-15 10:24:12.510041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:35.377 [2024-07-15 10:24:12.510047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:35.377 [2024-07-15 10:24:12.510054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:35.377 [2024-07-15 10:24:12.510059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:35.377 [2024-07-15 10:24:12.510266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:35.377 [2024-07-15 10:24:12.510273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:35.377 [2024-07-15 10:24:12.510279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:35.377 [2024-07-15 10:24:12.510288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:35.377 passed 00:16:35.377 Test: blockdev nvme passthru rw ...passed 00:16:35.377 Test: blockdev nvme passthru vendor specific ...[2024-07-15 10:24:12.510947] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:35.377 [2024-07-15 10:24:12.510955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:35.377 [2024-07-15 10:24:12.510999] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:35.377 [2024-07-15 10:24:12.511005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:35.377 [2024-07-15 10:24:12.511054] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:35.377 [2024-07-15 10:24:12.511060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:35.377 [2024-07-15 10:24:12.511096] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:16:35.377 [2024-07-15 10:24:12.511101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:35.377 passed 00:16:35.377 Test: blockdev nvme admin passthru ...passed 00:16:35.377 Test: blockdev copy ...passed 00:16:35.377 00:16:35.377 Run Summary: Type Total Ran Passed Failed Inactive 00:16:35.377 suites 1 1 n/a 0 0 00:16:35.377 tests 23 23 23 0 0 00:16:35.377 asserts 152 152 152 0 n/a 00:16:35.377 00:16:35.377 Elapsed time = 0.231 seconds 00:16:35.638 10:24:12 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:35.638 10:24:12 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.638 10:24:12 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:35.638 10:24:12 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.638 10:24:12 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:35.638 10:24:12 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:16:35.638 10:24:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:35.638 10:24:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:16:35.638 10:24:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:16:35.638 10:24:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:16:35.638 10:24:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:16:35.638 10:24:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:35.638 10:24:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:16:35.638 rmmod nvme_rdma 00:16:35.638 rmmod nvme_fabrics 00:16:35.638 10:24:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:35.638 10:24:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:16:35.638 10:24:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:16:35.638 10:24:12 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 2919227 ']' 00:16:35.638 10:24:12 
nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 2919227 00:16:35.638 10:24:12 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 2919227 ']' 00:16:35.638 10:24:12 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 2919227 00:16:35.638 10:24:12 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:16:35.638 10:24:12 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:35.638 10:24:12 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2919227 00:16:35.638 10:24:12 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:16:35.638 10:24:12 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:16:35.638 10:24:12 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2919227' 00:16:35.638 killing process with pid 2919227 00:16:35.638 10:24:12 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 2919227 00:16:35.638 10:24:12 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 2919227 00:16:35.898 10:24:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:35.898 10:24:13 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:16:35.898 00:16:35.898 real 0m10.380s 00:16:35.898 user 0m11.415s 00:16:35.898 sys 0m6.495s 00:16:35.898 10:24:13 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:35.898 10:24:13 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:35.898 ************************************ 00:16:35.898 END TEST nvmf_bdevio 00:16:35.898 ************************************ 00:16:36.159 10:24:13 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:16:36.159 10:24:13 nvmf_rdma -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:16:36.159 10:24:13 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:36.159 10:24:13 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:36.159 10:24:13 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:16:36.159 ************************************ 00:16:36.159 START TEST nvmf_auth_target 00:16:36.159 ************************************ 00:16:36.159 10:24:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:16:36.159 * Looking for test storage... 
00:16:36.159 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:36.159 10:24:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:36.159 10:24:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:36.159 10:24:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:36.159 10:24:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:36.159 10:24:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:36.159 10:24:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:36.159 10:24:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:36.159 10:24:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:36.159 10:24:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:36.159 10:24:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:36.159 10:24:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:36.159 10:24:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:36.159 10:24:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:36.159 10:24:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:36.159 10:24:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:36.159 10:24:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:36.159 10:24:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:36.159 10:24:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:36.159 10:24:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:36.159 10:24:13 nvmf_rdma.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:36.159 10:24:13 nvmf_rdma.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:36.159 10:24:13 nvmf_rdma.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:36.160 10:24:13 nvmf_rdma.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.160 10:24:13 nvmf_rdma.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.160 10:24:13 nvmf_rdma.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.160 10:24:13 nvmf_rdma.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:36.160 10:24:13 nvmf_rdma.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:36.160 10:24:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:16:36.160 10:24:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:36.160 10:24:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:36.160 10:24:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:36.160 10:24:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:36.160 10:24:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:36.160 10:24:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:36.160 10:24:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:36.160 10:24:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:36.160 10:24:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:36.160 10:24:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:36.160 10:24:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:36.160 10:24:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:36.160 10:24:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:36.160 10:24:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:36.160 10:24:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:36.160 10:24:13 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@59 -- # nvmftestinit 00:16:36.160 10:24:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:16:36.160 10:24:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:36.160 10:24:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:36.160 10:24:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:36.160 10:24:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:36.160 10:24:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:36.160 10:24:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:36.160 10:24:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:36.160 10:24:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:36.160 10:24:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:36.160 10:24:13 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:16:36.160 10:24:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:16:44.309 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:16:44.309 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:16:44.309 Found net devices under 0000:98:00.0: mlx_0_0 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:16:44.309 Found net devices under 0000:98:00.1: mlx_0_1 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@420 -- # rdma_device_init 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@58 -- # uname 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@62 -- # modprobe ib_cm 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@63 -- # modprobe ib_core 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@64 -- # modprobe ib_umad 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@66 -- # modprobe iw_cm 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@502 -- # allocate_nic_ips 00:16:44.309 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@73 -- # get_rdma_if_list 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@101 -- # for 
net_dev in "${net_devs[@]}" 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:16:44.310 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:44.310 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:16:44.310 altname enp152s0f0np0 00:16:44.310 altname ens817f0np0 00:16:44.310 inet 192.168.100.8/24 scope global mlx_0_0 00:16:44.310 valid_lft forever preferred_lft forever 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:16:44.310 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:44.310 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:16:44.310 altname enp152s0f1np1 00:16:44.310 altname ens817f1np1 00:16:44.310 inet 192.168.100.9/24 scope global mlx_0_1 00:16:44.310 valid_lft forever preferred_lft forever 
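The get_ip_address helper traced above boils down to a single pipeline over ip -o -4 addr show; on this node it yields 192.168.100.8 for mlx_0_0 and 192.168.100.9 for mlx_0_1, which feed the RDMA_IP_LIST derivation that follows:

  # sketch only: extract the IPv4 address of an RDMA netdev (mlx_0_1 shown)
  ip -o -4 addr show mlx_0_1 | awk '{print $4}' | cut -d/ -f1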
00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@86 -- # get_rdma_if_list 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:44.310 
10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:16:44.310 192.168.100.9' 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:16:44.310 192.168.100.9' 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@457 -- # head -n 1 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:16:44.310 192.168.100.9' 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@458 -- # tail -n +2 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@458 -- # head -n 1 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2923927 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2923927 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2923927 ']' 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
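The two addresses collected above are then split out of the newline-separated RDMA_IP_LIST with head/tail before the RDMA transport options are set, exactly as the trace shows; condensed:

# RDMA_IP_LIST holds one address per line ("192.168.100.8" then "192.168.100.9" here)
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
# RDMA targets get extra shared buffers, and the host kernel needs nvme-rdma loaded
NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
modprobe nvme-rdma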
00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.310 10:24:21 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:45.250 10:24:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:45.250 10:24:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:45.250 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:45.250 10:24:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:45.250 10:24:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.250 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:45.250 10:24:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=2923963 00:16:45.250 10:24:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:45.250 10:24:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:45.250 10:24:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:16:45.250 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:45.250 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:45.250 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:45.250 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:16:45.250 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:45.250 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:45.250 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=19ea6bfaf39ac1e4d97d391b553cae6a70bf9757f382dba2 00:16:45.250 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:16:45.250 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.zLy 00:16:45.251 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 19ea6bfaf39ac1e4d97d391b553cae6a70bf9757f382dba2 0 00:16:45.251 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 19ea6bfaf39ac1e4d97d391b553cae6a70bf9757f382dba2 0 00:16:45.251 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:45.251 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:45.251 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=19ea6bfaf39ac1e4d97d391b553cae6a70bf9757f382dba2 00:16:45.251 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:16:45.251 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:45.251 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.zLy 00:16:45.251 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.zLy 00:16:45.251 10:24:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.zLy 
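Each gen_dhchap_key call in the trace follows the same recipe: pull random hex from /dev/urandom with xxd, wrap it in a DHHC-1 envelope via a short python step, and store the result in a 0600 temp file whose path becomes keys[N] or ckeys[N]. A minimal sketch of that recipe; the envelope encoding (base64 of the hex string plus a CRC32 trailer) is an assumption inferred from the DHHC-1 secrets printed later in the log, not copied from nvmf/common.sh:

# Sketch of gen_dhchap_key null 48 as traced above.
# ASSUMPTION: the python step emits base64(hex_key + crc32(hex_key)) inside the
# "DHHC-1:<digest>:<base64>:" envelope; that matches the secrets shown further
# down, but the exact CRC packing is inferred rather than taken from the script.
digest=0                                # 0=null, 1=sha256, 2=sha384, 3=sha512
key=$(xxd -p -c0 -l 24 /dev/urandom)    # 48 hex characters of key material
file=$(mktemp -t spdk.key-null.XXX)
python3 - "$key" "$digest" > "$file" <<'EOF'
import base64, sys, zlib
key, digest = sys.argv[1], int(sys.argv[2])
payload = key.encode() + zlib.crc32(key.encode()).to_bytes(4, "little")
print(f"DHHC-1:{digest:02}:{base64.b64encode(payload).decode()}:")
EOF
chmod 0600 "$file"                      # referenced by path later, e.g. keys[0]=$file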
00:16:45.251 10:24:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:16:45.251 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:45.251 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:45.251 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:45.251 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:16:45.251 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:16:45.251 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:45.251 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=2cf4837622f158aeb9f7a40fa7e173fec917b864b5754f671efb69c107e133fe 00:16:45.251 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:45.251 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.TRk 00:16:45.251 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 2cf4837622f158aeb9f7a40fa7e173fec917b864b5754f671efb69c107e133fe 3 00:16:45.251 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 2cf4837622f158aeb9f7a40fa7e173fec917b864b5754f671efb69c107e133fe 3 00:16:45.251 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:45.251 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:45.251 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=2cf4837622f158aeb9f7a40fa7e173fec917b864b5754f671efb69c107e133fe 00:16:45.251 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:16:45.251 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:45.251 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.TRk 00:16:45.251 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.TRk 00:16:45.251 10:24:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.TRk 00:16:45.251 10:24:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:16:45.251 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:45.251 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:45.251 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:45.251 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:16:45.251 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:16:45.251 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:45.251 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d579715c93264471f9151bedbc309b78 00:16:45.251 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:45.251 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.VoM 00:16:45.512 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d579715c93264471f9151bedbc309b78 1 00:16:45.512 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d579715c93264471f9151bedbc309b78 1 00:16:45.512 10:24:22 
nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:45.512 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:45.512 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d579715c93264471f9151bedbc309b78 00:16:45.512 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:16:45.512 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:45.512 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.VoM 00:16:45.512 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.VoM 00:16:45.512 10:24:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.VoM 00:16:45.512 10:24:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:16:45.512 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:45.512 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:45.512 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:45.512 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:16:45.512 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:45.512 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:45.512 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a06da5d70a4ea121fe2172dcd9b1453b8516acbb1624af32 00:16:45.512 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:45.512 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Vej 00:16:45.512 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a06da5d70a4ea121fe2172dcd9b1453b8516acbb1624af32 2 00:16:45.512 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a06da5d70a4ea121fe2172dcd9b1453b8516acbb1624af32 2 00:16:45.512 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:45.512 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:45.512 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a06da5d70a4ea121fe2172dcd9b1453b8516acbb1624af32 00:16:45.512 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:16:45.512 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:45.512 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Vej 00:16:45.512 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Vej 00:16:45.512 10:24:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.Vej 00:16:45.512 10:24:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:16:45.512 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:45.512 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:45.512 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:45.512 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:16:45.512 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 
00:16:45.512 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:45.512 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=48f1d6d922bc17d573e862480fb1ac3261dcae6e57cf60b4 00:16:45.512 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:45.512 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.trZ 00:16:45.512 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 48f1d6d922bc17d573e862480fb1ac3261dcae6e57cf60b4 2 00:16:45.512 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 48f1d6d922bc17d573e862480fb1ac3261dcae6e57cf60b4 2 00:16:45.512 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:45.512 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:45.512 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=48f1d6d922bc17d573e862480fb1ac3261dcae6e57cf60b4 00:16:45.512 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:16:45.512 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:45.512 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.trZ 00:16:45.512 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.trZ 00:16:45.512 10:24:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.trZ 00:16:45.512 10:24:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:16:45.512 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:45.512 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:45.512 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:45.512 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:16:45.512 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:16:45.512 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:45.513 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b599aaba928c08eb495c42c2abf58ffe 00:16:45.513 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:45.513 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.zKA 00:16:45.513 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b599aaba928c08eb495c42c2abf58ffe 1 00:16:45.513 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b599aaba928c08eb495c42c2abf58ffe 1 00:16:45.513 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:45.513 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:45.513 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b599aaba928c08eb495c42c2abf58ffe 00:16:45.513 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:16:45.513 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:45.513 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.zKA 00:16:45.513 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.zKA 
00:16:45.513 10:24:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.zKA 00:16:45.513 10:24:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:16:45.513 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:45.513 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:45.513 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:45.513 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:16:45.513 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:16:45.513 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:45.513 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=5b6685646ec6ccebc4a32c3dd4ba3ec7bbc755b631b5bf195754ac9d0d674379 00:16:45.513 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:45.513 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Nvf 00:16:45.513 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 5b6685646ec6ccebc4a32c3dd4ba3ec7bbc755b631b5bf195754ac9d0d674379 3 00:16:45.513 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 5b6685646ec6ccebc4a32c3dd4ba3ec7bbc755b631b5bf195754ac9d0d674379 3 00:16:45.513 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:45.513 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:45.513 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=5b6685646ec6ccebc4a32c3dd4ba3ec7bbc755b631b5bf195754ac9d0d674379 00:16:45.513 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:16:45.513 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:45.773 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Nvf 00:16:45.773 10:24:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Nvf 00:16:45.773 10:24:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.Nvf 00:16:45.773 10:24:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:16:45.773 10:24:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 2923927 00:16:45.773 10:24:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2923927 ']' 00:16:45.773 10:24:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:45.773 10:24:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:45.773 10:24:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:45.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
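With the four key/ckey pairs in place, the remainder of the trace loops over digest, dhgroup and key id; every iteration replays the same RPC sequence against the target socket (default /var/tmp/spdk.sock) and the host app (/var/tmp/host.sock). A condensed sketch of one iteration, with the rpc.py path, NQNs and key names taken from this run:

# One connect_authenticate round, matching the sha256/null/key0 pass below.
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396

# Register the key files on both sides
$rpc keyring_file_add_key key0 /tmp/spdk.key-null.zLy
$rpc -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.zLy
$rpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.TRk
$rpc -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.TRk

# Pin the host initiator to one digest/dhgroup combination
$rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null

# Allow the host on the subsystem with the matching key pair, then attach
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
    -a 192.168.100.8 -s 4420 -q "$hostnqn" -n "$subnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# The qpair's auth block should report "completed" for this digest/dhgroup
$rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0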
00:16:45.773 10:24:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:45.773 10:24:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.773 10:24:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:45.773 10:24:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:45.773 10:24:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 2923963 /var/tmp/host.sock 00:16:45.773 10:24:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2923963 ']' 00:16:45.773 10:24:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:16:45.773 10:24:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:45.773 10:24:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:45.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:16:45.773 10:24:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:45.773 10:24:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.033 10:24:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:46.033 10:24:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:46.033 10:24:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:16:46.033 10:24:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.033 10:24:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.033 10:24:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.033 10:24:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:46.033 10:24:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.zLy 00:16:46.033 10:24:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.033 10:24:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.033 10:24:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.033 10:24:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.zLy 00:16:46.033 10:24:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.zLy 00:16:46.292 10:24:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.TRk ]] 00:16:46.292 10:24:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.TRk 00:16:46.292 10:24:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.292 10:24:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.292 10:24:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.292 10:24:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.TRk 00:16:46.292 10:24:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.TRk 00:16:46.552 10:24:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:46.552 10:24:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.VoM 00:16:46.552 10:24:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.552 10:24:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.552 10:24:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.552 10:24:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.VoM 00:16:46.552 10:24:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.VoM 00:16:46.552 10:24:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.Vej ]] 00:16:46.552 10:24:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Vej 00:16:46.552 10:24:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.552 10:24:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.552 10:24:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.552 10:24:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Vej 00:16:46.552 10:24:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Vej 00:16:46.812 10:24:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:46.812 10:24:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.trZ 00:16:46.812 10:24:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.812 10:24:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.812 10:24:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.812 10:24:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.trZ 00:16:46.812 10:24:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.trZ 00:16:46.812 10:24:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.zKA ]] 00:16:46.812 10:24:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.zKA 00:16:46.812 10:24:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.812 10:24:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.812 10:24:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.812 10:24:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.zKA 00:16:46.812 10:24:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.zKA 00:16:47.073 10:24:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:47.073 10:24:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Nvf 00:16:47.073 10:24:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.073 10:24:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.073 10:24:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.073 10:24:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Nvf 00:16:47.073 10:24:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Nvf 00:16:47.333 10:24:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:16:47.333 10:24:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:47.333 10:24:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:47.333 10:24:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:47.333 10:24:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:47.333 10:24:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:47.333 10:24:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:16:47.333 10:24:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:47.333 10:24:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:47.333 10:24:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:47.333 10:24:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:47.333 10:24:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.333 10:24:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.333 10:24:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.333 10:24:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.333 10:24:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.333 10:24:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.333 10:24:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.593 00:16:47.593 10:24:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:47.593 10:24:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:47.593 10:24:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.852 10:24:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.852 10:24:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.852 10:24:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.852 10:24:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.852 10:24:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.852 10:24:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:47.852 { 00:16:47.852 "cntlid": 1, 00:16:47.852 "qid": 0, 00:16:47.852 "state": "enabled", 00:16:47.852 "thread": "nvmf_tgt_poll_group_000", 00:16:47.852 "listen_address": { 00:16:47.852 "trtype": "RDMA", 00:16:47.852 "adrfam": "IPv4", 00:16:47.852 "traddr": "192.168.100.8", 00:16:47.852 "trsvcid": "4420" 00:16:47.852 }, 00:16:47.852 "peer_address": { 00:16:47.852 "trtype": "RDMA", 00:16:47.852 "adrfam": "IPv4", 00:16:47.852 "traddr": "192.168.100.8", 00:16:47.852 "trsvcid": "50767" 00:16:47.852 }, 00:16:47.852 "auth": { 00:16:47.852 "state": "completed", 00:16:47.852 "digest": "sha256", 00:16:47.852 "dhgroup": "null" 00:16:47.852 } 00:16:47.852 } 00:16:47.852 ]' 00:16:47.852 10:24:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:47.852 10:24:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:47.852 10:24:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:47.852 10:24:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:47.852 10:24:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:47.852 10:24:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.852 10:24:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.852 10:24:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.111 10:24:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MTllYTZiZmFmMzlhYzFlNGQ5N2QzOTFiNTUzY2FlNmE3MGJmOTc1N2YzODJkYmEybZjEEQ==: --dhchap-ctrl-secret DHHC-1:03:MmNmNDgzNzYyMmYxNThhZWI5ZjdhNDBmYTdlMTczZmVjOTE3Yjg2NGI1NzU0ZjY3MWVmYjY5YzEwN2UxMzNmZRyZR80=: 00:16:49.072 10:24:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.072 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.073 10:24:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:49.073 10:24:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.073 10:24:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.073 10:24:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.073 10:24:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:49.073 10:24:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:49.073 10:24:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:49.332 10:24:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:16:49.333 10:24:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:49.333 10:24:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:49.333 10:24:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:49.333 10:24:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:49.333 10:24:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.333 10:24:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.333 10:24:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.333 10:24:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.333 10:24:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.333 10:24:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.333 10:24:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.592 00:16:49.592 10:24:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:49.592 10:24:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:49.592 10:24:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.592 10:24:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.592 10:24:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.592 10:24:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.592 10:24:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.592 10:24:26 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.592 10:24:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:49.592 { 00:16:49.592 "cntlid": 3, 00:16:49.592 "qid": 0, 00:16:49.592 "state": "enabled", 00:16:49.592 "thread": "nvmf_tgt_poll_group_000", 00:16:49.592 "listen_address": { 00:16:49.592 "trtype": "RDMA", 00:16:49.592 "adrfam": "IPv4", 00:16:49.592 "traddr": "192.168.100.8", 00:16:49.592 "trsvcid": "4420" 00:16:49.592 }, 00:16:49.592 "peer_address": { 00:16:49.592 "trtype": "RDMA", 00:16:49.592 "adrfam": "IPv4", 00:16:49.592 "traddr": "192.168.100.8", 00:16:49.592 "trsvcid": "48852" 00:16:49.592 }, 00:16:49.592 "auth": { 00:16:49.592 "state": "completed", 00:16:49.592 "digest": "sha256", 00:16:49.592 "dhgroup": "null" 00:16:49.592 } 00:16:49.592 } 00:16:49.592 ]' 00:16:49.592 10:24:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:49.592 10:24:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:49.852 10:24:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:49.852 10:24:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:49.852 10:24:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:49.852 10:24:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.852 10:24:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.852 10:24:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.111 10:24:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:ZDU3OTcxNWM5MzI2NDQ3MWY5MTUxYmVkYmMzMDliNziVtd3P: --dhchap-ctrl-secret DHHC-1:02:YTA2ZGE1ZDcwYTRlYTEyMWZlMjE3MmRjZDliMTQ1M2I4NTE2YWNiYjE2MjRhZjMysjNqcg==: 00:16:50.681 10:24:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.941 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.941 10:24:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:50.941 10:24:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.941 10:24:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.941 10:24:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.941 10:24:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:50.941 10:24:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:50.941 10:24:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:51.201 10:24:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:16:51.201 10:24:28 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:51.201 10:24:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:51.201 10:24:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:51.201 10:24:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:51.201 10:24:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.201 10:24:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.201 10:24:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.201 10:24:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.201 10:24:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.201 10:24:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.201 10:24:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.201 00:16:51.459 10:24:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:51.459 10:24:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:51.459 10:24:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.459 10:24:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.459 10:24:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.459 10:24:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.459 10:24:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.459 10:24:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.459 10:24:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:51.459 { 00:16:51.459 "cntlid": 5, 00:16:51.459 "qid": 0, 00:16:51.459 "state": "enabled", 00:16:51.459 "thread": "nvmf_tgt_poll_group_000", 00:16:51.459 "listen_address": { 00:16:51.459 "trtype": "RDMA", 00:16:51.459 "adrfam": "IPv4", 00:16:51.459 "traddr": "192.168.100.8", 00:16:51.459 "trsvcid": "4420" 00:16:51.459 }, 00:16:51.459 "peer_address": { 00:16:51.459 "trtype": "RDMA", 00:16:51.459 "adrfam": "IPv4", 00:16:51.459 "traddr": "192.168.100.8", 00:16:51.459 "trsvcid": "49367" 00:16:51.459 }, 00:16:51.459 "auth": { 00:16:51.459 "state": "completed", 00:16:51.459 "digest": "sha256", 00:16:51.459 "dhgroup": "null" 00:16:51.459 } 00:16:51.459 } 00:16:51.459 ]' 00:16:51.459 10:24:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:51.459 10:24:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:16:51.459 10:24:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:51.719 10:24:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:51.719 10:24:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:51.719 10:24:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.719 10:24:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.719 10:24:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.719 10:24:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:NDhmMWQ2ZDkyMmJjMTdkNTczZTg2MjQ4MGZiMWFjMzI2MWRjYWU2ZTU3Y2Y2MGI0kyBPIg==: --dhchap-ctrl-secret DHHC-1:01:YjU5OWFhYmE5MjhjMDhlYjQ5NWM0MmMyYWJmNThmZmU4NTAg: 00:16:52.659 10:24:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.659 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.659 10:24:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:52.659 10:24:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.659 10:24:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.970 10:24:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.970 10:24:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:52.970 10:24:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:52.970 10:24:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:52.970 10:24:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:16:52.970 10:24:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:52.970 10:24:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:52.970 10:24:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:52.970 10:24:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:52.970 10:24:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.970 10:24:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:16:52.970 10:24:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.970 10:24:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.970 10:24:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.970 10:24:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- 
# hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:52.970 10:24:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:53.230 00:16:53.230 10:24:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:53.230 10:24:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:53.230 10:24:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.489 10:24:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.489 10:24:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.489 10:24:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.489 10:24:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.489 10:24:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.490 10:24:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:53.490 { 00:16:53.490 "cntlid": 7, 00:16:53.490 "qid": 0, 00:16:53.490 "state": "enabled", 00:16:53.490 "thread": "nvmf_tgt_poll_group_000", 00:16:53.490 "listen_address": { 00:16:53.490 "trtype": "RDMA", 00:16:53.490 "adrfam": "IPv4", 00:16:53.490 "traddr": "192.168.100.8", 00:16:53.490 "trsvcid": "4420" 00:16:53.490 }, 00:16:53.490 "peer_address": { 00:16:53.490 "trtype": "RDMA", 00:16:53.490 "adrfam": "IPv4", 00:16:53.490 "traddr": "192.168.100.8", 00:16:53.490 "trsvcid": "54814" 00:16:53.490 }, 00:16:53.490 "auth": { 00:16:53.490 "state": "completed", 00:16:53.490 "digest": "sha256", 00:16:53.490 "dhgroup": "null" 00:16:53.490 } 00:16:53.490 } 00:16:53.490 ]' 00:16:53.490 10:24:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:53.490 10:24:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:53.490 10:24:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:53.490 10:24:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:53.490 10:24:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:53.490 10:24:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.490 10:24:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.490 10:24:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.750 10:24:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret 
DHHC-1:03:NWI2Njg1NjQ2ZWM2Y2NlYmM0YTMyYzNkZDRiYTNlYzdiYmM3NTViNjMxYjViZjE5NTc1NGFjOWQwZDY3NDM3OWUz75w=: 00:16:54.320 10:24:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.580 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.580 10:24:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:54.580 10:24:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.580 10:24:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.580 10:24:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.580 10:24:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:54.580 10:24:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:54.580 10:24:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:54.580 10:24:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:54.580 10:24:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:16:54.580 10:24:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:54.580 10:24:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:54.580 10:24:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:54.580 10:24:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:54.580 10:24:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.580 10:24:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.580 10:24:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.580 10:24:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.580 10:24:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.580 10:24:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.580 10:24:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.839 00:16:54.839 10:24:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:54.839 10:24:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:54.839 10:24:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.099 10:24:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.099 10:24:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.099 10:24:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.099 10:24:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.099 10:24:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.099 10:24:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:55.099 { 00:16:55.099 "cntlid": 9, 00:16:55.099 "qid": 0, 00:16:55.099 "state": "enabled", 00:16:55.099 "thread": "nvmf_tgt_poll_group_000", 00:16:55.099 "listen_address": { 00:16:55.099 "trtype": "RDMA", 00:16:55.099 "adrfam": "IPv4", 00:16:55.099 "traddr": "192.168.100.8", 00:16:55.099 "trsvcid": "4420" 00:16:55.099 }, 00:16:55.099 "peer_address": { 00:16:55.099 "trtype": "RDMA", 00:16:55.099 "adrfam": "IPv4", 00:16:55.099 "traddr": "192.168.100.8", 00:16:55.099 "trsvcid": "57903" 00:16:55.099 }, 00:16:55.099 "auth": { 00:16:55.099 "state": "completed", 00:16:55.099 "digest": "sha256", 00:16:55.099 "dhgroup": "ffdhe2048" 00:16:55.099 } 00:16:55.099 } 00:16:55.099 ]' 00:16:55.099 10:24:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:55.099 10:24:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:55.099 10:24:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:55.099 10:24:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:55.099 10:24:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:55.359 10:24:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.359 10:24:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.359 10:24:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.359 10:24:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MTllYTZiZmFmMzlhYzFlNGQ5N2QzOTFiNTUzY2FlNmE3MGJmOTc1N2YzODJkYmEybZjEEQ==: --dhchap-ctrl-secret DHHC-1:03:MmNmNDgzNzYyMmYxNThhZWI5ZjdhNDBmYTdlMTczZmVjOTE3Yjg2NGI1NzU0ZjY3MWVmYjY5YzEwN2UxMzNmZRyZR80=: 00:16:56.298 10:24:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.298 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.298 10:24:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:56.298 10:24:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.298 10:24:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.298 10:24:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.298 
10:24:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:56.298 10:24:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:56.298 10:24:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:56.558 10:24:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:16:56.558 10:24:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:56.558 10:24:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:56.558 10:24:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:56.558 10:24:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:56.558 10:24:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.558 10:24:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.558 10:24:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.558 10:24:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.558 10:24:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.558 10:24:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.558 10:24:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.818 00:16:56.818 10:24:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:56.818 10:24:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:56.818 10:24:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.818 10:24:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.818 10:24:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.818 10:24:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.818 10:24:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.079 10:24:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.079 10:24:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:57.079 { 00:16:57.079 "cntlid": 11, 00:16:57.079 "qid": 0, 00:16:57.079 "state": "enabled", 00:16:57.079 "thread": "nvmf_tgt_poll_group_000", 00:16:57.079 "listen_address": { 00:16:57.079 "trtype": "RDMA", 
00:16:57.079 "adrfam": "IPv4", 00:16:57.079 "traddr": "192.168.100.8", 00:16:57.080 "trsvcid": "4420" 00:16:57.080 }, 00:16:57.080 "peer_address": { 00:16:57.080 "trtype": "RDMA", 00:16:57.080 "adrfam": "IPv4", 00:16:57.080 "traddr": "192.168.100.8", 00:16:57.080 "trsvcid": "56540" 00:16:57.080 }, 00:16:57.080 "auth": { 00:16:57.080 "state": "completed", 00:16:57.080 "digest": "sha256", 00:16:57.080 "dhgroup": "ffdhe2048" 00:16:57.080 } 00:16:57.080 } 00:16:57.080 ]' 00:16:57.080 10:24:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:57.080 10:24:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:57.080 10:24:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:57.080 10:24:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:57.080 10:24:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:57.080 10:24:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.080 10:24:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.080 10:24:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.341 10:24:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:ZDU3OTcxNWM5MzI2NDQ3MWY5MTUxYmVkYmMzMDliNziVtd3P: --dhchap-ctrl-secret DHHC-1:02:YTA2ZGE1ZDcwYTRlYTEyMWZlMjE3MmRjZDliMTQ1M2I4NTE2YWNiYjE2MjRhZjMysjNqcg==: 00:16:58.283 10:24:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.283 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.283 10:24:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:58.283 10:24:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.283 10:24:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.283 10:24:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.283 10:24:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:58.283 10:24:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:58.283 10:24:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:58.283 10:24:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:16:58.283 10:24:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:58.283 10:24:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:58.283 10:24:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:58.283 10:24:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:58.283 
10:24:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.283 10:24:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.283 10:24:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.283 10:24:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.283 10:24:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.283 10:24:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.283 10:24:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.575 00:16:58.575 10:24:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:58.575 10:24:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:58.575 10:24:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.897 10:24:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.897 10:24:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.897 10:24:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.897 10:24:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.897 10:24:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.897 10:24:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:58.897 { 00:16:58.897 "cntlid": 13, 00:16:58.897 "qid": 0, 00:16:58.897 "state": "enabled", 00:16:58.897 "thread": "nvmf_tgt_poll_group_000", 00:16:58.897 "listen_address": { 00:16:58.897 "trtype": "RDMA", 00:16:58.897 "adrfam": "IPv4", 00:16:58.897 "traddr": "192.168.100.8", 00:16:58.897 "trsvcid": "4420" 00:16:58.897 }, 00:16:58.897 "peer_address": { 00:16:58.897 "trtype": "RDMA", 00:16:58.897 "adrfam": "IPv4", 00:16:58.897 "traddr": "192.168.100.8", 00:16:58.897 "trsvcid": "34997" 00:16:58.897 }, 00:16:58.897 "auth": { 00:16:58.897 "state": "completed", 00:16:58.897 "digest": "sha256", 00:16:58.897 "dhgroup": "ffdhe2048" 00:16:58.897 } 00:16:58.897 } 00:16:58.897 ]' 00:16:58.897 10:24:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:58.897 10:24:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:58.897 10:24:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:58.897 10:24:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:58.897 10:24:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 
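[Editor's sketch] The iterations traced above all repeat the same pattern: restrict the host driver to one digest/dhgroup, allow the host NQN on the subsystem with a DH-HMAC-CHAP key, attach from the SPDK host and check the qpair's negotiated auth parameters, then redo the handshake with the kernel initiator before cleaning up. The following is a minimal sketch of a single iteration, assembled only from the commands visible in this trace; the target-side RPC socket, the inline rpc_cmd/hostrpc helper definitions, and the $secret/$ctrl_secret placeholders are assumptions, since the corresponding setup (keyring registration, secret generation) happens earlier in the test and is not shown here.

#!/usr/bin/env bash
set -e
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }  # host-side bdev_nvme RPCs, as in the trace
rpc_cmd() { "$rpc" "$@"; }                        # target-side nvmf RPCs (default socket assumed)

subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
hostid=00539ede-7deb-ec11-9bc7-a4bf01928396
digest=sha256 dhgroup=ffdhe2048 key=key1 ckey=ckey1   # key names registered earlier in the test
secret="DHHC-1:..." ctrl_secret="DHHC-1:..."          # placeholders for the generated secrets

# 1. Limit the host-side driver to the digest/dhgroup under test.
hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# 2. Allow the host on the subsystem with the chosen key (controller key is optional).
rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "$key" --dhchap-ctrlr-key "$ckey"

# 3. Attach from the SPDK host and confirm the controller came up.
hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
    -q "$hostnqn" -n "$subnqn" --dhchap-key "$key" --dhchap-ctrlr-key "$ckey"
[[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# 4. Verify the target-side qpair reports the negotiated auth parameters.
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

# 5. Repeat the handshake with the kernel initiator, passing the secrets directly.
hostrpc bdev_nvme_detach_controller nvme0
nvme connect -t rdma -a 192.168.100.8 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" \
    --dhchap-secret "$secret" --dhchap-ctrl-secret "$ctrl_secret"
nvme disconnect -n "$subnqn"

# 6. Remove the host so the next digest/dhgroup/key combination starts clean.
rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

In the trace this iteration is driven by nested loops over the dhgroups (null, ffdhe2048, ffdhe3072, ffdhe4096, ...) and keys 0 through 3, with the key3 pass omitting the --dhchap-ctrlr-key / --dhchap-ctrl-secret arguments.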
00:16:58.897 10:24:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.898 10:24:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.898 10:24:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.158 10:24:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:NDhmMWQ2ZDkyMmJjMTdkNTczZTg2MjQ4MGZiMWFjMzI2MWRjYWU2ZTU3Y2Y2MGI0kyBPIg==: --dhchap-ctrl-secret DHHC-1:01:YjU5OWFhYmE5MjhjMDhlYjQ5NWM0MmMyYWJmNThmZmU4NTAg: 00:17:00.098 10:24:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.098 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.098 10:24:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:00.098 10:24:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.098 10:24:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.098 10:24:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.098 10:24:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:00.098 10:24:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:00.098 10:24:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:00.098 10:24:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:17:00.098 10:24:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:00.098 10:24:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:00.098 10:24:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:00.098 10:24:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:00.098 10:24:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.098 10:24:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:00.098 10:24:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.098 10:24:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.358 10:24:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.358 10:24:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:00.358 10:24:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:00.358 00:17:00.358 10:24:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:00.358 10:24:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.358 10:24:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:00.617 10:24:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.617 10:24:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.617 10:24:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.617 10:24:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.617 10:24:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.618 10:24:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:00.618 { 00:17:00.618 "cntlid": 15, 00:17:00.618 "qid": 0, 00:17:00.618 "state": "enabled", 00:17:00.618 "thread": "nvmf_tgt_poll_group_000", 00:17:00.618 "listen_address": { 00:17:00.618 "trtype": "RDMA", 00:17:00.618 "adrfam": "IPv4", 00:17:00.618 "traddr": "192.168.100.8", 00:17:00.618 "trsvcid": "4420" 00:17:00.618 }, 00:17:00.618 "peer_address": { 00:17:00.618 "trtype": "RDMA", 00:17:00.618 "adrfam": "IPv4", 00:17:00.618 "traddr": "192.168.100.8", 00:17:00.618 "trsvcid": "53954" 00:17:00.618 }, 00:17:00.618 "auth": { 00:17:00.618 "state": "completed", 00:17:00.618 "digest": "sha256", 00:17:00.618 "dhgroup": "ffdhe2048" 00:17:00.618 } 00:17:00.618 } 00:17:00.618 ]' 00:17:00.618 10:24:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:00.618 10:24:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:00.618 10:24:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:00.618 10:24:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:00.618 10:24:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:00.878 10:24:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.878 10:24:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.878 10:24:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.878 10:24:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:NWI2Njg1NjQ2ZWM2Y2NlYmM0YTMyYzNkZDRiYTNlYzdiYmM3NTViNjMxYjViZjE5NTc1NGFjOWQwZDY3NDM3OWUz75w=: 00:17:01.821 10:24:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.821 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.821 10:24:38 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:01.821 10:24:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.821 10:24:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.821 10:24:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.821 10:24:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:01.821 10:24:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:01.821 10:24:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:01.821 10:24:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:02.083 10:24:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:17:02.083 10:24:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:02.083 10:24:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:02.083 10:24:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:02.083 10:24:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:02.083 10:24:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.083 10:24:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.083 10:24:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.083 10:24:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.083 10:24:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.083 10:24:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.083 10:24:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.083 00:17:02.345 10:24:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:02.345 10:24:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:02.345 10:24:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.345 10:24:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.345 10:24:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.345 10:24:39 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.345 10:24:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.345 10:24:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.345 10:24:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:02.345 { 00:17:02.345 "cntlid": 17, 00:17:02.345 "qid": 0, 00:17:02.345 "state": "enabled", 00:17:02.345 "thread": "nvmf_tgt_poll_group_000", 00:17:02.345 "listen_address": { 00:17:02.345 "trtype": "RDMA", 00:17:02.345 "adrfam": "IPv4", 00:17:02.345 "traddr": "192.168.100.8", 00:17:02.345 "trsvcid": "4420" 00:17:02.345 }, 00:17:02.345 "peer_address": { 00:17:02.345 "trtype": "RDMA", 00:17:02.345 "adrfam": "IPv4", 00:17:02.345 "traddr": "192.168.100.8", 00:17:02.345 "trsvcid": "41277" 00:17:02.345 }, 00:17:02.345 "auth": { 00:17:02.345 "state": "completed", 00:17:02.345 "digest": "sha256", 00:17:02.345 "dhgroup": "ffdhe3072" 00:17:02.345 } 00:17:02.345 } 00:17:02.345 ]' 00:17:02.345 10:24:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:02.345 10:24:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:02.345 10:24:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:02.607 10:24:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:02.607 10:24:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:02.607 10:24:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.607 10:24:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.607 10:24:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.607 10:24:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MTllYTZiZmFmMzlhYzFlNGQ5N2QzOTFiNTUzY2FlNmE3MGJmOTc1N2YzODJkYmEybZjEEQ==: --dhchap-ctrl-secret DHHC-1:03:MmNmNDgzNzYyMmYxNThhZWI5ZjdhNDBmYTdlMTczZmVjOTE3Yjg2NGI1NzU0ZjY3MWVmYjY5YzEwN2UxMzNmZRyZR80=: 00:17:03.550 10:24:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.550 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.550 10:24:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:03.550 10:24:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.550 10:24:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.550 10:24:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.550 10:24:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:03.550 10:24:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:03.550 10:24:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:03.811 10:24:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:17:03.811 10:24:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:03.811 10:24:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:03.811 10:24:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:03.811 10:24:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:03.811 10:24:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.811 10:24:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.811 10:24:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.811 10:24:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.811 10:24:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.811 10:24:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.811 10:24:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.072 00:17:04.072 10:24:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:04.072 10:24:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:04.072 10:24:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.072 10:24:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.072 10:24:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.072 10:24:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.072 10:24:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.072 10:24:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.072 10:24:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:04.072 { 00:17:04.072 "cntlid": 19, 00:17:04.072 "qid": 0, 00:17:04.072 "state": "enabled", 00:17:04.072 "thread": "nvmf_tgt_poll_group_000", 00:17:04.072 "listen_address": { 00:17:04.072 "trtype": "RDMA", 00:17:04.072 "adrfam": "IPv4", 00:17:04.072 "traddr": "192.168.100.8", 00:17:04.072 "trsvcid": "4420" 00:17:04.072 }, 00:17:04.072 "peer_address": { 00:17:04.072 "trtype": "RDMA", 00:17:04.072 "adrfam": "IPv4", 00:17:04.072 "traddr": "192.168.100.8", 00:17:04.072 "trsvcid": "59949" 00:17:04.072 }, 00:17:04.072 "auth": { 
00:17:04.072 "state": "completed", 00:17:04.072 "digest": "sha256", 00:17:04.072 "dhgroup": "ffdhe3072" 00:17:04.072 } 00:17:04.072 } 00:17:04.072 ]' 00:17:04.072 10:24:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:04.333 10:24:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:04.333 10:24:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:04.333 10:24:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:04.333 10:24:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:04.333 10:24:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.333 10:24:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.333 10:24:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.594 10:24:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:ZDU3OTcxNWM5MzI2NDQ3MWY5MTUxYmVkYmMzMDliNziVtd3P: --dhchap-ctrl-secret DHHC-1:02:YTA2ZGE1ZDcwYTRlYTEyMWZlMjE3MmRjZDliMTQ1M2I4NTE2YWNiYjE2MjRhZjMysjNqcg==: 00:17:05.165 10:24:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.425 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.425 10:24:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:05.425 10:24:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.425 10:24:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.425 10:24:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.425 10:24:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:05.425 10:24:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:05.425 10:24:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:05.685 10:24:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:17:05.685 10:24:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:05.685 10:24:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:05.685 10:24:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:05.685 10:24:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:05.685 10:24:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.685 10:24:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key 
key2 --dhchap-ctrlr-key ckey2 00:17:05.685 10:24:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.685 10:24:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.685 10:24:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.686 10:24:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.686 10:24:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.947 00:17:05.947 10:24:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:05.947 10:24:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:05.947 10:24:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.947 10:24:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.947 10:24:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.947 10:24:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.947 10:24:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.947 10:24:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.947 10:24:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:05.947 { 00:17:05.947 "cntlid": 21, 00:17:05.947 "qid": 0, 00:17:05.947 "state": "enabled", 00:17:05.947 "thread": "nvmf_tgt_poll_group_000", 00:17:05.947 "listen_address": { 00:17:05.947 "trtype": "RDMA", 00:17:05.947 "adrfam": "IPv4", 00:17:05.947 "traddr": "192.168.100.8", 00:17:05.947 "trsvcid": "4420" 00:17:05.947 }, 00:17:05.947 "peer_address": { 00:17:05.947 "trtype": "RDMA", 00:17:05.947 "adrfam": "IPv4", 00:17:05.947 "traddr": "192.168.100.8", 00:17:05.947 "trsvcid": "41234" 00:17:05.947 }, 00:17:05.947 "auth": { 00:17:05.947 "state": "completed", 00:17:05.947 "digest": "sha256", 00:17:05.947 "dhgroup": "ffdhe3072" 00:17:05.947 } 00:17:05.947 } 00:17:05.947 ]' 00:17:05.947 10:24:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:06.217 10:24:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:06.217 10:24:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:06.217 10:24:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:06.217 10:24:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:06.217 10:24:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.217 10:24:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.217 10:24:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.217 10:24:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:NDhmMWQ2ZDkyMmJjMTdkNTczZTg2MjQ4MGZiMWFjMzI2MWRjYWU2ZTU3Y2Y2MGI0kyBPIg==: --dhchap-ctrl-secret DHHC-1:01:YjU5OWFhYmE5MjhjMDhlYjQ5NWM0MmMyYWJmNThmZmU4NTAg: 00:17:07.163 10:24:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.163 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.163 10:24:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:07.163 10:24:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.163 10:24:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.163 10:24:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.163 10:24:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:07.163 10:24:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:07.163 10:24:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:07.423 10:24:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:17:07.423 10:24:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:07.423 10:24:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:07.423 10:24:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:07.423 10:24:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:07.423 10:24:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.423 10:24:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:07.423 10:24:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.423 10:24:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.423 10:24:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.423 10:24:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:07.423 10:24:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:07.423 00:17:07.686 10:24:44 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:07.686 10:24:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:07.686 10:24:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.686 10:24:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.686 10:24:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.686 10:24:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.686 10:24:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.686 10:24:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.686 10:24:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:07.686 { 00:17:07.686 "cntlid": 23, 00:17:07.686 "qid": 0, 00:17:07.686 "state": "enabled", 00:17:07.686 "thread": "nvmf_tgt_poll_group_000", 00:17:07.686 "listen_address": { 00:17:07.686 "trtype": "RDMA", 00:17:07.686 "adrfam": "IPv4", 00:17:07.686 "traddr": "192.168.100.8", 00:17:07.686 "trsvcid": "4420" 00:17:07.686 }, 00:17:07.686 "peer_address": { 00:17:07.686 "trtype": "RDMA", 00:17:07.686 "adrfam": "IPv4", 00:17:07.686 "traddr": "192.168.100.8", 00:17:07.686 "trsvcid": "33291" 00:17:07.686 }, 00:17:07.686 "auth": { 00:17:07.686 "state": "completed", 00:17:07.686 "digest": "sha256", 00:17:07.686 "dhgroup": "ffdhe3072" 00:17:07.686 } 00:17:07.686 } 00:17:07.686 ]' 00:17:07.686 10:24:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:07.686 10:24:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:07.686 10:24:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:07.947 10:24:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:07.947 10:24:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:07.947 10:24:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.947 10:24:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.947 10:24:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.947 10:24:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:NWI2Njg1NjQ2ZWM2Y2NlYmM0YTMyYzNkZDRiYTNlYzdiYmM3NTViNjMxYjViZjE5NTc1NGFjOWQwZDY3NDM3OWUz75w=: 00:17:08.891 10:24:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.891 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.891 10:24:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:08.891 10:24:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.891 10:24:45 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:08.891 10:24:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.891 10:24:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:08.891 10:24:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:08.891 10:24:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:08.891 10:24:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:09.152 10:24:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:17:09.152 10:24:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:09.152 10:24:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:09.152 10:24:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:09.152 10:24:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:09.152 10:24:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.152 10:24:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.152 10:24:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.152 10:24:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.152 10:24:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.152 10:24:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.152 10:24:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.413 00:17:09.413 10:24:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:09.413 10:24:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:09.413 10:24:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.413 10:24:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.413 10:24:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.413 10:24:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.414 10:24:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.414 10:24:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.414 10:24:46 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:09.414 { 00:17:09.414 "cntlid": 25, 00:17:09.414 "qid": 0, 00:17:09.414 "state": "enabled", 00:17:09.414 "thread": "nvmf_tgt_poll_group_000", 00:17:09.414 "listen_address": { 00:17:09.414 "trtype": "RDMA", 00:17:09.414 "adrfam": "IPv4", 00:17:09.414 "traddr": "192.168.100.8", 00:17:09.414 "trsvcid": "4420" 00:17:09.414 }, 00:17:09.414 "peer_address": { 00:17:09.414 "trtype": "RDMA", 00:17:09.414 "adrfam": "IPv4", 00:17:09.414 "traddr": "192.168.100.8", 00:17:09.414 "trsvcid": "35770" 00:17:09.414 }, 00:17:09.414 "auth": { 00:17:09.414 "state": "completed", 00:17:09.414 "digest": "sha256", 00:17:09.414 "dhgroup": "ffdhe4096" 00:17:09.414 } 00:17:09.414 } 00:17:09.414 ]' 00:17:09.414 10:24:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:09.414 10:24:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:09.414 10:24:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:09.675 10:24:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:09.675 10:24:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:09.675 10:24:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.675 10:24:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.675 10:24:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.675 10:24:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MTllYTZiZmFmMzlhYzFlNGQ5N2QzOTFiNTUzY2FlNmE3MGJmOTc1N2YzODJkYmEybZjEEQ==: --dhchap-ctrl-secret DHHC-1:03:MmNmNDgzNzYyMmYxNThhZWI5ZjdhNDBmYTdlMTczZmVjOTE3Yjg2NGI1NzU0ZjY3MWVmYjY5YzEwN2UxMzNmZRyZR80=: 00:17:10.617 10:24:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.617 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.617 10:24:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:10.617 10:24:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.617 10:24:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.617 10:24:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.617 10:24:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:10.617 10:24:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:10.617 10:24:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:10.879 10:24:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:17:10.879 10:24:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # 
local digest dhgroup key ckey qpairs 00:17:10.879 10:24:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:10.879 10:24:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:10.879 10:24:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:10.879 10:24:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.879 10:24:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:10.879 10:24:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.879 10:24:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.879 10:24:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.879 10:24:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:10.879 10:24:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.140 00:17:11.140 10:24:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:11.140 10:24:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:11.141 10:24:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.141 10:24:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.141 10:24:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.141 10:24:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.141 10:24:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.141 10:24:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.141 10:24:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:11.141 { 00:17:11.141 "cntlid": 27, 00:17:11.141 "qid": 0, 00:17:11.141 "state": "enabled", 00:17:11.141 "thread": "nvmf_tgt_poll_group_000", 00:17:11.141 "listen_address": { 00:17:11.141 "trtype": "RDMA", 00:17:11.141 "adrfam": "IPv4", 00:17:11.141 "traddr": "192.168.100.8", 00:17:11.141 "trsvcid": "4420" 00:17:11.141 }, 00:17:11.141 "peer_address": { 00:17:11.141 "trtype": "RDMA", 00:17:11.141 "adrfam": "IPv4", 00:17:11.141 "traddr": "192.168.100.8", 00:17:11.141 "trsvcid": "37491" 00:17:11.141 }, 00:17:11.141 "auth": { 00:17:11.141 "state": "completed", 00:17:11.141 "digest": "sha256", 00:17:11.141 "dhgroup": "ffdhe4096" 00:17:11.141 } 00:17:11.141 } 00:17:11.141 ]' 00:17:11.141 10:24:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:11.141 10:24:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 
]] 00:17:11.141 10:24:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:11.402 10:24:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:11.402 10:24:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:11.402 10:24:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.402 10:24:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.402 10:24:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.662 10:24:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:ZDU3OTcxNWM5MzI2NDQ3MWY5MTUxYmVkYmMzMDliNziVtd3P: --dhchap-ctrl-secret DHHC-1:02:YTA2ZGE1ZDcwYTRlYTEyMWZlMjE3MmRjZDliMTQ1M2I4NTE2YWNiYjE2MjRhZjMysjNqcg==: 00:17:12.233 10:24:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.494 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.494 10:24:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:12.494 10:24:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.494 10:24:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.494 10:24:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.494 10:24:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:12.494 10:24:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:12.494 10:24:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:12.494 10:24:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:17:12.494 10:24:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:12.494 10:24:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:12.494 10:24:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:12.494 10:24:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:12.494 10:24:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.494 10:24:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:12.494 10:24:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.494 10:24:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.494 10:24:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.494 10:24:49 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:12.494 10:24:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:12.754 00:17:12.754 10:24:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:12.754 10:24:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:12.754 10:24:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.015 10:24:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.015 10:24:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.015 10:24:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.015 10:24:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.015 10:24:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.015 10:24:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:13.015 { 00:17:13.015 "cntlid": 29, 00:17:13.015 "qid": 0, 00:17:13.015 "state": "enabled", 00:17:13.015 "thread": "nvmf_tgt_poll_group_000", 00:17:13.015 "listen_address": { 00:17:13.015 "trtype": "RDMA", 00:17:13.015 "adrfam": "IPv4", 00:17:13.015 "traddr": "192.168.100.8", 00:17:13.015 "trsvcid": "4420" 00:17:13.015 }, 00:17:13.015 "peer_address": { 00:17:13.015 "trtype": "RDMA", 00:17:13.015 "adrfam": "IPv4", 00:17:13.015 "traddr": "192.168.100.8", 00:17:13.015 "trsvcid": "50542" 00:17:13.015 }, 00:17:13.015 "auth": { 00:17:13.015 "state": "completed", 00:17:13.015 "digest": "sha256", 00:17:13.015 "dhgroup": "ffdhe4096" 00:17:13.015 } 00:17:13.015 } 00:17:13.015 ]' 00:17:13.015 10:24:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:13.015 10:24:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:13.015 10:24:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:13.015 10:24:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:13.015 10:24:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:13.275 10:24:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.275 10:24:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.275 10:24:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.275 10:24:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 
00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:NDhmMWQ2ZDkyMmJjMTdkNTczZTg2MjQ4MGZiMWFjMzI2MWRjYWU2ZTU3Y2Y2MGI0kyBPIg==: --dhchap-ctrl-secret DHHC-1:01:YjU5OWFhYmE5MjhjMDhlYjQ5NWM0MmMyYWJmNThmZmU4NTAg: 00:17:14.214 10:24:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.214 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.214 10:24:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:14.214 10:24:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.214 10:24:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.214 10:24:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.214 10:24:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:14.214 10:24:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:14.214 10:24:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:14.474 10:24:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:17:14.474 10:24:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:14.474 10:24:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:14.474 10:24:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:14.474 10:24:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:14.474 10:24:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.474 10:24:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:14.474 10:24:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.474 10:24:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.474 10:24:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.474 10:24:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:14.474 10:24:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:14.736 00:17:14.736 10:24:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:14.736 10:24:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:14.736 10:24:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:17:14.736 10:24:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.736 10:24:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.736 10:24:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.736 10:24:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.736 10:24:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.736 10:24:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:14.736 { 00:17:14.736 "cntlid": 31, 00:17:14.736 "qid": 0, 00:17:14.736 "state": "enabled", 00:17:14.736 "thread": "nvmf_tgt_poll_group_000", 00:17:14.736 "listen_address": { 00:17:14.736 "trtype": "RDMA", 00:17:14.736 "adrfam": "IPv4", 00:17:14.736 "traddr": "192.168.100.8", 00:17:14.736 "trsvcid": "4420" 00:17:14.736 }, 00:17:14.736 "peer_address": { 00:17:14.736 "trtype": "RDMA", 00:17:14.736 "adrfam": "IPv4", 00:17:14.736 "traddr": "192.168.100.8", 00:17:14.736 "trsvcid": "54362" 00:17:14.736 }, 00:17:14.736 "auth": { 00:17:14.736 "state": "completed", 00:17:14.736 "digest": "sha256", 00:17:14.736 "dhgroup": "ffdhe4096" 00:17:14.736 } 00:17:14.736 } 00:17:14.736 ]' 00:17:14.736 10:24:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:14.736 10:24:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:14.997 10:24:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:14.997 10:24:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:14.997 10:24:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:14.997 10:24:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.997 10:24:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.997 10:24:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.258 10:24:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:NWI2Njg1NjQ2ZWM2Y2NlYmM0YTMyYzNkZDRiYTNlYzdiYmM3NTViNjMxYjViZjE5NTc1NGFjOWQwZDY3NDM3OWUz75w=: 00:17:15.830 10:24:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.830 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.830 10:24:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:15.830 10:24:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.830 10:24:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.092 10:24:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.092 10:24:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:16.092 10:24:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:17:16.092 10:24:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:16.092 10:24:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:16.092 10:24:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:17:16.092 10:24:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:16.092 10:24:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:16.092 10:24:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:16.092 10:24:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:16.092 10:24:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.092 10:24:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.092 10:24:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.092 10:24:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.092 10:24:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.092 10:24:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.092 10:24:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.353 00:17:16.614 10:24:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:16.614 10:24:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:16.614 10:24:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.614 10:24:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.614 10:24:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.614 10:24:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.614 10:24:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.614 10:24:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.614 10:24:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:16.614 { 00:17:16.614 "cntlid": 33, 00:17:16.614 "qid": 0, 00:17:16.614 "state": "enabled", 00:17:16.614 "thread": "nvmf_tgt_poll_group_000", 00:17:16.614 "listen_address": { 00:17:16.614 "trtype": "RDMA", 00:17:16.614 "adrfam": "IPv4", 00:17:16.614 "traddr": "192.168.100.8", 
00:17:16.614 "trsvcid": "4420" 00:17:16.614 }, 00:17:16.614 "peer_address": { 00:17:16.614 "trtype": "RDMA", 00:17:16.614 "adrfam": "IPv4", 00:17:16.614 "traddr": "192.168.100.8", 00:17:16.614 "trsvcid": "49624" 00:17:16.614 }, 00:17:16.614 "auth": { 00:17:16.614 "state": "completed", 00:17:16.614 "digest": "sha256", 00:17:16.614 "dhgroup": "ffdhe6144" 00:17:16.614 } 00:17:16.614 } 00:17:16.614 ]' 00:17:16.614 10:24:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:16.614 10:24:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:16.614 10:24:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:16.874 10:24:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:16.874 10:24:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:16.874 10:24:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.874 10:24:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.874 10:24:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.874 10:24:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MTllYTZiZmFmMzlhYzFlNGQ5N2QzOTFiNTUzY2FlNmE3MGJmOTc1N2YzODJkYmEybZjEEQ==: --dhchap-ctrl-secret DHHC-1:03:MmNmNDgzNzYyMmYxNThhZWI5ZjdhNDBmYTdlMTczZmVjOTE3Yjg2NGI1NzU0ZjY3MWVmYjY5YzEwN2UxMzNmZRyZR80=: 00:17:17.817 10:24:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.817 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.817 10:24:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:17.817 10:24:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.817 10:24:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.817 10:24:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.817 10:24:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:17.817 10:24:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:17.817 10:24:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:18.077 10:24:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:17:18.077 10:24:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:18.077 10:24:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:18.077 10:24:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:18.077 10:24:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:18.077 10:24:55 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.077 10:24:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.077 10:24:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.077 10:24:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.077 10:24:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.077 10:24:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.077 10:24:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.337 00:17:18.338 10:24:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:18.338 10:24:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:18.338 10:24:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.599 10:24:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.599 10:24:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.599 10:24:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.599 10:24:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.599 10:24:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.599 10:24:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:18.599 { 00:17:18.599 "cntlid": 35, 00:17:18.599 "qid": 0, 00:17:18.599 "state": "enabled", 00:17:18.599 "thread": "nvmf_tgt_poll_group_000", 00:17:18.599 "listen_address": { 00:17:18.599 "trtype": "RDMA", 00:17:18.599 "adrfam": "IPv4", 00:17:18.599 "traddr": "192.168.100.8", 00:17:18.599 "trsvcid": "4420" 00:17:18.599 }, 00:17:18.599 "peer_address": { 00:17:18.599 "trtype": "RDMA", 00:17:18.599 "adrfam": "IPv4", 00:17:18.599 "traddr": "192.168.100.8", 00:17:18.599 "trsvcid": "58339" 00:17:18.599 }, 00:17:18.599 "auth": { 00:17:18.599 "state": "completed", 00:17:18.599 "digest": "sha256", 00:17:18.599 "dhgroup": "ffdhe6144" 00:17:18.599 } 00:17:18.599 } 00:17:18.599 ]' 00:17:18.599 10:24:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:18.599 10:24:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:18.599 10:24:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:18.599 10:24:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:18.599 10:24:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 
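For readers following the trace, one connect_authenticate round boils down to the sequence below. This is a condensed sketch assembled from the commands visible in this run, not the verbatim target/auth.sh: rpc_cmd is the framework's target-side RPC helper, hostrpc wraps rpc.py -s /var/tmp/host.sock as shown at target/auth.sh@31, and HOSTNQN/HOSTID/KEY1/CKEY1 are shorthand variables introduced here for the host NQN and the DHHC-1 secrets printed earlier in the log.

# shorthand for values taken from this run (names are illustrative only)
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396
KEY1='DHHC-1:01:ZDU3OTcxNWM5MzI2NDQ3MWY5MTUxYmVkYmMzMDliNziVtd3P:'
CKEY1='DHHC-1:02:YTA2ZGE1ZDcwYTRlYTEyMWZlMjE3MmRjZDliMTQ1M2I4NTE2YWNiYjE2MjRhZjMysjNqcg==:'

# host side: restrict the initiator to the digest/dhgroup combination under test
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

# target side: allow the host NQN with DH-HMAC-CHAP key1 (ckey1 as the controller key)
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

# host side: attach over RDMA, which triggers the in-band authentication
hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1

# verify: the controller came up and the qpair reports the negotiated auth parameters
hostrpc bdev_nvme_get_controllers | jq -r '.[].name'     # expect nvme0
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'   # sha256 / ffdhe4096 / completed

# tear down the SPDK initiator, then repeat the handshake through the kernel initiator
hostrpc bdev_nvme_detach_controller nvme0
nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$HOSTNQN" --hostid "$HOSTID" \
        --dhchap-secret "$KEY1" --dhchap-ctrl-secret "$CKEY1"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"

Each iteration in the remainder of the log repeats this pattern with the next dhgroup and key index (ffdhe6144, then ffdhe8192, then sha384 with the null group), which is why the same add_host/attach/verify/teardown sequence recurs with only the --dhchap-* arguments changing.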
00:17:18.599 10:24:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.599 10:24:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.599 10:24:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.860 10:24:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:ZDU3OTcxNWM5MzI2NDQ3MWY5MTUxYmVkYmMzMDliNziVtd3P: --dhchap-ctrl-secret DHHC-1:02:YTA2ZGE1ZDcwYTRlYTEyMWZlMjE3MmRjZDliMTQ1M2I4NTE2YWNiYjE2MjRhZjMysjNqcg==: 00:17:19.431 10:24:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.692 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.692 10:24:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:19.692 10:24:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.692 10:24:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.692 10:24:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.692 10:24:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:19.692 10:24:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:19.692 10:24:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:19.692 10:24:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:17:19.692 10:24:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:19.692 10:24:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:19.692 10:24:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:19.692 10:24:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:19.692 10:24:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.692 10:24:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.693 10:24:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.693 10:24:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.954 10:24:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.954 10:24:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.954 10:24:56 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.215 00:17:20.215 10:24:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:20.215 10:24:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.215 10:24:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:20.215 10:24:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.215 10:24:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.215 10:24:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.215 10:24:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.475 10:24:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.475 10:24:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:20.475 { 00:17:20.475 "cntlid": 37, 00:17:20.475 "qid": 0, 00:17:20.475 "state": "enabled", 00:17:20.475 "thread": "nvmf_tgt_poll_group_000", 00:17:20.475 "listen_address": { 00:17:20.475 "trtype": "RDMA", 00:17:20.475 "adrfam": "IPv4", 00:17:20.475 "traddr": "192.168.100.8", 00:17:20.475 "trsvcid": "4420" 00:17:20.475 }, 00:17:20.475 "peer_address": { 00:17:20.475 "trtype": "RDMA", 00:17:20.475 "adrfam": "IPv4", 00:17:20.475 "traddr": "192.168.100.8", 00:17:20.475 "trsvcid": "35920" 00:17:20.475 }, 00:17:20.475 "auth": { 00:17:20.475 "state": "completed", 00:17:20.475 "digest": "sha256", 00:17:20.475 "dhgroup": "ffdhe6144" 00:17:20.475 } 00:17:20.475 } 00:17:20.475 ]' 00:17:20.475 10:24:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:20.475 10:24:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:20.475 10:24:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:20.475 10:24:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:20.475 10:24:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:20.475 10:24:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.475 10:24:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.475 10:24:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.736 10:24:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:NDhmMWQ2ZDkyMmJjMTdkNTczZTg2MjQ4MGZiMWFjMzI2MWRjYWU2ZTU3Y2Y2MGI0kyBPIg==: --dhchap-ctrl-secret DHHC-1:01:YjU5OWFhYmE5MjhjMDhlYjQ5NWM0MmMyYWJmNThmZmU4NTAg: 00:17:21.308 10:24:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:17:21.569 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.569 10:24:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:21.569 10:24:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.569 10:24:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.569 10:24:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.569 10:24:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:21.569 10:24:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:21.569 10:24:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:21.569 10:24:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:17:21.569 10:24:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:21.569 10:24:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:21.569 10:24:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:21.569 10:24:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:21.569 10:24:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.569 10:24:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:21.569 10:24:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.569 10:24:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.569 10:24:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.569 10:24:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:21.569 10:24:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:22.140 00:17:22.140 10:24:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:22.140 10:24:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:22.140 10:24:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.140 10:24:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.140 10:24:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.140 10:24:59 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.140 10:24:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.140 10:24:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.140 10:24:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:22.140 { 00:17:22.140 "cntlid": 39, 00:17:22.140 "qid": 0, 00:17:22.140 "state": "enabled", 00:17:22.140 "thread": "nvmf_tgt_poll_group_000", 00:17:22.140 "listen_address": { 00:17:22.140 "trtype": "RDMA", 00:17:22.140 "adrfam": "IPv4", 00:17:22.140 "traddr": "192.168.100.8", 00:17:22.140 "trsvcid": "4420" 00:17:22.140 }, 00:17:22.140 "peer_address": { 00:17:22.140 "trtype": "RDMA", 00:17:22.140 "adrfam": "IPv4", 00:17:22.140 "traddr": "192.168.100.8", 00:17:22.140 "trsvcid": "43219" 00:17:22.140 }, 00:17:22.140 "auth": { 00:17:22.140 "state": "completed", 00:17:22.140 "digest": "sha256", 00:17:22.140 "dhgroup": "ffdhe6144" 00:17:22.140 } 00:17:22.140 } 00:17:22.141 ]' 00:17:22.141 10:24:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:22.141 10:24:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:22.141 10:24:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:22.403 10:24:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:22.403 10:24:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:22.403 10:24:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.403 10:24:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.403 10:24:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.403 10:24:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:NWI2Njg1NjQ2ZWM2Y2NlYmM0YTMyYzNkZDRiYTNlYzdiYmM3NTViNjMxYjViZjE5NTc1NGFjOWQwZDY3NDM3OWUz75w=: 00:17:23.348 10:25:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.348 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.348 10:25:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:23.348 10:25:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.348 10:25:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.348 10:25:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.348 10:25:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:23.348 10:25:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:23.348 10:25:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:23.348 10:25:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:23.609 10:25:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:17:23.609 10:25:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:23.609 10:25:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:23.609 10:25:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:23.609 10:25:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:23.609 10:25:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.609 10:25:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:23.609 10:25:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.609 10:25:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.609 10:25:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.609 10:25:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:23.609 10:25:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.181 00:17:24.181 10:25:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:24.181 10:25:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:24.181 10:25:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.181 10:25:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.181 10:25:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.181 10:25:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.181 10:25:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.442 10:25:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.442 10:25:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:24.442 { 00:17:24.442 "cntlid": 41, 00:17:24.442 "qid": 0, 00:17:24.442 "state": "enabled", 00:17:24.442 "thread": "nvmf_tgt_poll_group_000", 00:17:24.442 "listen_address": { 00:17:24.442 "trtype": "RDMA", 00:17:24.442 "adrfam": "IPv4", 00:17:24.442 "traddr": "192.168.100.8", 00:17:24.442 "trsvcid": "4420" 00:17:24.442 }, 00:17:24.442 "peer_address": { 00:17:24.442 "trtype": "RDMA", 00:17:24.442 "adrfam": "IPv4", 00:17:24.442 "traddr": "192.168.100.8", 00:17:24.442 "trsvcid": "48943" 00:17:24.442 }, 00:17:24.442 "auth": { 00:17:24.442 "state": "completed", 00:17:24.442 "digest": "sha256", 
00:17:24.442 "dhgroup": "ffdhe8192" 00:17:24.442 } 00:17:24.442 } 00:17:24.442 ]' 00:17:24.442 10:25:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:24.442 10:25:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:24.442 10:25:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:24.442 10:25:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:24.442 10:25:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:24.442 10:25:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.442 10:25:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.442 10:25:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.703 10:25:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MTllYTZiZmFmMzlhYzFlNGQ5N2QzOTFiNTUzY2FlNmE3MGJmOTc1N2YzODJkYmEybZjEEQ==: --dhchap-ctrl-secret DHHC-1:03:MmNmNDgzNzYyMmYxNThhZWI5ZjdhNDBmYTdlMTczZmVjOTE3Yjg2NGI1NzU0ZjY3MWVmYjY5YzEwN2UxMzNmZRyZR80=: 00:17:25.274 10:25:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.536 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.536 10:25:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:25.536 10:25:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.536 10:25:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.536 10:25:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.536 10:25:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:25.536 10:25:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:25.536 10:25:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:25.798 10:25:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:17:25.798 10:25:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:25.798 10:25:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:25.798 10:25:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:25.798 10:25:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:25.798 10:25:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.798 10:25:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:17:25.798 10:25:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.798 10:25:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.798 10:25:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.798 10:25:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:25.798 10:25:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.370 00:17:26.371 10:25:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:26.371 10:25:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:26.371 10:25:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.371 10:25:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.371 10:25:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.371 10:25:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.371 10:25:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.371 10:25:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.371 10:25:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:26.371 { 00:17:26.371 "cntlid": 43, 00:17:26.371 "qid": 0, 00:17:26.371 "state": "enabled", 00:17:26.371 "thread": "nvmf_tgt_poll_group_000", 00:17:26.371 "listen_address": { 00:17:26.371 "trtype": "RDMA", 00:17:26.371 "adrfam": "IPv4", 00:17:26.371 "traddr": "192.168.100.8", 00:17:26.371 "trsvcid": "4420" 00:17:26.371 }, 00:17:26.371 "peer_address": { 00:17:26.371 "trtype": "RDMA", 00:17:26.371 "adrfam": "IPv4", 00:17:26.371 "traddr": "192.168.100.8", 00:17:26.371 "trsvcid": "59882" 00:17:26.371 }, 00:17:26.371 "auth": { 00:17:26.371 "state": "completed", 00:17:26.371 "digest": "sha256", 00:17:26.371 "dhgroup": "ffdhe8192" 00:17:26.371 } 00:17:26.371 } 00:17:26.371 ]' 00:17:26.371 10:25:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:26.371 10:25:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:26.371 10:25:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:26.631 10:25:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:26.631 10:25:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:26.631 10:25:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.631 10:25:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.631 10:25:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.631 10:25:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:ZDU3OTcxNWM5MzI2NDQ3MWY5MTUxYmVkYmMzMDliNziVtd3P: --dhchap-ctrl-secret DHHC-1:02:YTA2ZGE1ZDcwYTRlYTEyMWZlMjE3MmRjZDliMTQ1M2I4NTE2YWNiYjE2MjRhZjMysjNqcg==: 00:17:27.574 10:25:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.575 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.575 10:25:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:27.575 10:25:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.575 10:25:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.575 10:25:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.575 10:25:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:27.575 10:25:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:27.575 10:25:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:27.835 10:25:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:17:27.835 10:25:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:27.835 10:25:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:27.835 10:25:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:27.835 10:25:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:27.835 10:25:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.835 10:25:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:27.835 10:25:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.835 10:25:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.835 10:25:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.835 10:25:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:27.835 10:25:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.407 00:17:28.407 10:25:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:28.407 10:25:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:28.407 10:25:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.407 10:25:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.407 10:25:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.407 10:25:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.407 10:25:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.407 10:25:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.407 10:25:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:28.407 { 00:17:28.407 "cntlid": 45, 00:17:28.407 "qid": 0, 00:17:28.407 "state": "enabled", 00:17:28.407 "thread": "nvmf_tgt_poll_group_000", 00:17:28.407 "listen_address": { 00:17:28.407 "trtype": "RDMA", 00:17:28.407 "adrfam": "IPv4", 00:17:28.407 "traddr": "192.168.100.8", 00:17:28.407 "trsvcid": "4420" 00:17:28.407 }, 00:17:28.407 "peer_address": { 00:17:28.407 "trtype": "RDMA", 00:17:28.407 "adrfam": "IPv4", 00:17:28.407 "traddr": "192.168.100.8", 00:17:28.407 "trsvcid": "39054" 00:17:28.407 }, 00:17:28.407 "auth": { 00:17:28.407 "state": "completed", 00:17:28.407 "digest": "sha256", 00:17:28.407 "dhgroup": "ffdhe8192" 00:17:28.407 } 00:17:28.407 } 00:17:28.407 ]' 00:17:28.407 10:25:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:28.407 10:25:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:28.407 10:25:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:28.711 10:25:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:28.711 10:25:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:28.711 10:25:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.711 10:25:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.711 10:25:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.711 10:25:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:NDhmMWQ2ZDkyMmJjMTdkNTczZTg2MjQ4MGZiMWFjMzI2MWRjYWU2ZTU3Y2Y2MGI0kyBPIg==: --dhchap-ctrl-secret DHHC-1:01:YjU5OWFhYmE5MjhjMDhlYjQ5NWM0MmMyYWJmNThmZmU4NTAg: 00:17:29.683 10:25:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.683 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.683 10:25:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:29.683 10:25:06 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.683 10:25:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.684 10:25:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.684 10:25:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:29.684 10:25:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:29.684 10:25:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:29.684 10:25:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:17:29.684 10:25:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:29.684 10:25:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:29.684 10:25:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:29.684 10:25:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:29.684 10:25:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.684 10:25:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:29.684 10:25:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.684 10:25:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.684 10:25:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.684 10:25:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:29.684 10:25:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:30.254 00:17:30.254 10:25:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:30.254 10:25:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:30.254 10:25:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.516 10:25:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.516 10:25:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.516 10:25:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.516 10:25:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.516 10:25:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.516 10:25:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:17:30.516 { 00:17:30.516 "cntlid": 47, 00:17:30.516 "qid": 0, 00:17:30.516 "state": "enabled", 00:17:30.516 "thread": "nvmf_tgt_poll_group_000", 00:17:30.516 "listen_address": { 00:17:30.516 "trtype": "RDMA", 00:17:30.516 "adrfam": "IPv4", 00:17:30.516 "traddr": "192.168.100.8", 00:17:30.516 "trsvcid": "4420" 00:17:30.516 }, 00:17:30.516 "peer_address": { 00:17:30.516 "trtype": "RDMA", 00:17:30.516 "adrfam": "IPv4", 00:17:30.516 "traddr": "192.168.100.8", 00:17:30.516 "trsvcid": "48014" 00:17:30.516 }, 00:17:30.516 "auth": { 00:17:30.516 "state": "completed", 00:17:30.516 "digest": "sha256", 00:17:30.516 "dhgroup": "ffdhe8192" 00:17:30.516 } 00:17:30.516 } 00:17:30.516 ]' 00:17:30.516 10:25:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:30.516 10:25:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:30.516 10:25:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:30.516 10:25:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:30.516 10:25:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:30.516 10:25:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.516 10:25:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.516 10:25:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.777 10:25:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:NWI2Njg1NjQ2ZWM2Y2NlYmM0YTMyYzNkZDRiYTNlYzdiYmM3NTViNjMxYjViZjE5NTc1NGFjOWQwZDY3NDM3OWUz75w=: 00:17:31.717 10:25:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.718 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.718 10:25:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:31.718 10:25:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.718 10:25:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.718 10:25:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.718 10:25:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:31.718 10:25:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:31.718 10:25:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:31.718 10:25:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:31.718 10:25:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:31.718 10:25:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:17:31.718 10:25:08 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:31.718 10:25:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:31.718 10:25:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:31.718 10:25:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:31.718 10:25:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.718 10:25:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.718 10:25:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.718 10:25:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.718 10:25:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.718 10:25:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.718 10:25:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.978 00:17:31.978 10:25:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:31.978 10:25:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:31.978 10:25:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.238 10:25:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.238 10:25:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.238 10:25:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.238 10:25:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.238 10:25:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.238 10:25:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:32.238 { 00:17:32.238 "cntlid": 49, 00:17:32.238 "qid": 0, 00:17:32.238 "state": "enabled", 00:17:32.238 "thread": "nvmf_tgt_poll_group_000", 00:17:32.238 "listen_address": { 00:17:32.238 "trtype": "RDMA", 00:17:32.238 "adrfam": "IPv4", 00:17:32.238 "traddr": "192.168.100.8", 00:17:32.238 "trsvcid": "4420" 00:17:32.238 }, 00:17:32.238 "peer_address": { 00:17:32.238 "trtype": "RDMA", 00:17:32.238 "adrfam": "IPv4", 00:17:32.238 "traddr": "192.168.100.8", 00:17:32.238 "trsvcid": "53797" 00:17:32.238 }, 00:17:32.238 "auth": { 00:17:32.238 "state": "completed", 00:17:32.238 "digest": "sha384", 00:17:32.238 "dhgroup": "null" 00:17:32.238 } 00:17:32.238 } 00:17:32.238 ]' 00:17:32.238 10:25:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:32.238 10:25:09 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:32.238 10:25:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:32.238 10:25:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:32.238 10:25:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:32.499 10:25:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.499 10:25:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.499 10:25:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.500 10:25:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MTllYTZiZmFmMzlhYzFlNGQ5N2QzOTFiNTUzY2FlNmE3MGJmOTc1N2YzODJkYmEybZjEEQ==: --dhchap-ctrl-secret DHHC-1:03:MmNmNDgzNzYyMmYxNThhZWI5ZjdhNDBmYTdlMTczZmVjOTE3Yjg2NGI1NzU0ZjY3MWVmYjY5YzEwN2UxMzNmZRyZR80=: 00:17:33.439 10:25:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.439 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.439 10:25:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:33.439 10:25:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.439 10:25:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.439 10:25:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.439 10:25:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:33.439 10:25:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:33.439 10:25:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:33.707 10:25:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:17:33.707 10:25:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:33.707 10:25:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:33.707 10:25:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:33.707 10:25:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:33.707 10:25:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.707 10:25:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.707 10:25:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.707 10:25:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.707 10:25:10 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.707 10:25:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.707 10:25:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.967 00:17:33.967 10:25:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:33.967 10:25:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:33.967 10:25:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.967 10:25:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.967 10:25:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.967 10:25:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.967 10:25:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.967 10:25:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.967 10:25:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:33.967 { 00:17:33.967 "cntlid": 51, 00:17:33.967 "qid": 0, 00:17:33.967 "state": "enabled", 00:17:33.967 "thread": "nvmf_tgt_poll_group_000", 00:17:33.967 "listen_address": { 00:17:33.967 "trtype": "RDMA", 00:17:33.967 "adrfam": "IPv4", 00:17:33.967 "traddr": "192.168.100.8", 00:17:33.967 "trsvcid": "4420" 00:17:33.967 }, 00:17:33.967 "peer_address": { 00:17:33.967 "trtype": "RDMA", 00:17:33.967 "adrfam": "IPv4", 00:17:33.967 "traddr": "192.168.100.8", 00:17:33.967 "trsvcid": "52021" 00:17:33.967 }, 00:17:33.967 "auth": { 00:17:33.967 "state": "completed", 00:17:33.967 "digest": "sha384", 00:17:33.967 "dhgroup": "null" 00:17:33.967 } 00:17:33.967 } 00:17:33.967 ]' 00:17:33.967 10:25:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:33.967 10:25:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:34.226 10:25:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:34.226 10:25:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:34.226 10:25:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:34.226 10:25:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.226 10:25:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.226 10:25:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.226 10:25:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:ZDU3OTcxNWM5MzI2NDQ3MWY5MTUxYmVkYmMzMDliNziVtd3P: --dhchap-ctrl-secret DHHC-1:02:YTA2ZGE1ZDcwYTRlYTEyMWZlMjE3MmRjZDliMTQ1M2I4NTE2YWNiYjE2MjRhZjMysjNqcg==: 00:17:35.165 10:25:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.165 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.165 10:25:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:35.165 10:25:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.165 10:25:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.165 10:25:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.165 10:25:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:35.165 10:25:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:35.165 10:25:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:35.426 10:25:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:17:35.426 10:25:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:35.426 10:25:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:35.426 10:25:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:35.426 10:25:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:35.426 10:25:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.426 10:25:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.426 10:25:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.426 10:25:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.426 10:25:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.426 10:25:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.426 10:25:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.426 00:17:35.426 10:25:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:35.426 10:25:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:35.426 10:25:12 
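Each attach is then verified the same way throughout the surrounding trace: the controller name is read back over the host socket, and the qpair's auth block is inspected on the target with the same jq filters used in the log. A standalone sketch of those checks (socket path, sub-NQN and expected values copied from this trace):

  # confirm the controller exists and that authentication actually completed (sketch)
  name=$(scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == nvme0 ]]

  qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

  # detach again so the next digest/dhgroup/key combination starts clean
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0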
nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.686 10:25:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.686 10:25:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.686 10:25:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.686 10:25:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.686 10:25:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.686 10:25:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:35.686 { 00:17:35.686 "cntlid": 53, 00:17:35.686 "qid": 0, 00:17:35.686 "state": "enabled", 00:17:35.686 "thread": "nvmf_tgt_poll_group_000", 00:17:35.686 "listen_address": { 00:17:35.686 "trtype": "RDMA", 00:17:35.686 "adrfam": "IPv4", 00:17:35.686 "traddr": "192.168.100.8", 00:17:35.686 "trsvcid": "4420" 00:17:35.686 }, 00:17:35.686 "peer_address": { 00:17:35.687 "trtype": "RDMA", 00:17:35.687 "adrfam": "IPv4", 00:17:35.687 "traddr": "192.168.100.8", 00:17:35.687 "trsvcid": "41354" 00:17:35.687 }, 00:17:35.687 "auth": { 00:17:35.687 "state": "completed", 00:17:35.687 "digest": "sha384", 00:17:35.687 "dhgroup": "null" 00:17:35.687 } 00:17:35.687 } 00:17:35.687 ]' 00:17:35.687 10:25:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:35.687 10:25:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:35.687 10:25:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:35.687 10:25:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:35.687 10:25:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:35.947 10:25:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.947 10:25:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.947 10:25:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.947 10:25:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:NDhmMWQ2ZDkyMmJjMTdkNTczZTg2MjQ4MGZiMWFjMzI2MWRjYWU2ZTU3Y2Y2MGI0kyBPIg==: --dhchap-ctrl-secret DHHC-1:01:YjU5OWFhYmE5MjhjMDhlYjQ5NWM0MmMyYWJmNThmZmU4NTAg: 00:17:36.898 10:25:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.898 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.898 10:25:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:36.898 10:25:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.898 10:25:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.898 10:25:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.898 10:25:14 
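The second half of every pass, visible just above, repeats the authentication in-band from the kernel initiator with nvme-cli and then removes the host entry again. A sketch with the DHHC-1 secrets replaced by placeholders (the log carries the real base64 blobs):

  # kernel-host DH-HMAC-CHAP connect (sketch; the secrets below are placeholders)
  nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
      --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 \
      --dhchap-secret 'DHHC-1:02:<host secret>:' \
      --dhchap-ctrl-secret 'DHHC-1:01:<controller secret>:'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0

  # drop the subsystem ACL entry before the next key is exercised
  # (rpc_cmd in the trace; assumed to hit the target's default RPC socket)
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396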
nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:36.898 10:25:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:36.898 10:25:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:37.159 10:25:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:17:37.159 10:25:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:37.159 10:25:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:37.159 10:25:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:37.159 10:25:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:37.159 10:25:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.159 10:25:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:37.159 10:25:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.159 10:25:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.159 10:25:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.159 10:25:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:37.159 10:25:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:37.419 00:17:37.419 10:25:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:37.419 10:25:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.419 10:25:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:37.419 10:25:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.419 10:25:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.419 10:25:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.419 10:25:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.419 10:25:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.419 10:25:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:37.419 { 00:17:37.419 "cntlid": 55, 00:17:37.419 "qid": 0, 00:17:37.419 "state": "enabled", 00:17:37.419 "thread": "nvmf_tgt_poll_group_000", 00:17:37.419 "listen_address": { 00:17:37.419 "trtype": "RDMA", 00:17:37.419 "adrfam": "IPv4", 00:17:37.419 "traddr": "192.168.100.8", 00:17:37.419 "trsvcid": "4420" 
00:17:37.419 }, 00:17:37.419 "peer_address": { 00:17:37.419 "trtype": "RDMA", 00:17:37.419 "adrfam": "IPv4", 00:17:37.419 "traddr": "192.168.100.8", 00:17:37.419 "trsvcid": "53614" 00:17:37.419 }, 00:17:37.419 "auth": { 00:17:37.419 "state": "completed", 00:17:37.419 "digest": "sha384", 00:17:37.419 "dhgroup": "null" 00:17:37.419 } 00:17:37.419 } 00:17:37.419 ]' 00:17:37.419 10:25:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:37.419 10:25:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:37.419 10:25:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:37.679 10:25:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:37.679 10:25:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:37.679 10:25:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.679 10:25:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.679 10:25:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.679 10:25:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:NWI2Njg1NjQ2ZWM2Y2NlYmM0YTMyYzNkZDRiYTNlYzdiYmM3NTViNjMxYjViZjE5NTc1NGFjOWQwZDY3NDM3OWUz75w=: 00:17:38.621 10:25:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.621 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.621 10:25:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:38.621 10:25:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.621 10:25:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.621 10:25:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.621 10:25:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:38.621 10:25:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:38.621 10:25:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:38.621 10:25:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:38.882 10:25:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:17:38.882 10:25:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:38.882 10:25:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:38.882 10:25:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:38.882 10:25:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:38.882 10:25:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.882 10:25:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.882 10:25:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.882 10:25:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.882 10:25:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.882 10:25:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.882 10:25:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.882 00:17:38.882 10:25:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:38.882 10:25:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:38.882 10:25:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.143 10:25:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.143 10:25:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.143 10:25:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.143 10:25:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.143 10:25:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.143 10:25:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:39.143 { 00:17:39.143 "cntlid": 57, 00:17:39.143 "qid": 0, 00:17:39.143 "state": "enabled", 00:17:39.143 "thread": "nvmf_tgt_poll_group_000", 00:17:39.143 "listen_address": { 00:17:39.143 "trtype": "RDMA", 00:17:39.143 "adrfam": "IPv4", 00:17:39.143 "traddr": "192.168.100.8", 00:17:39.143 "trsvcid": "4420" 00:17:39.143 }, 00:17:39.143 "peer_address": { 00:17:39.143 "trtype": "RDMA", 00:17:39.143 "adrfam": "IPv4", 00:17:39.143 "traddr": "192.168.100.8", 00:17:39.143 "trsvcid": "49570" 00:17:39.143 }, 00:17:39.143 "auth": { 00:17:39.143 "state": "completed", 00:17:39.143 "digest": "sha384", 00:17:39.143 "dhgroup": "ffdhe2048" 00:17:39.143 } 00:17:39.143 } 00:17:39.143 ]' 00:17:39.143 10:25:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:39.143 10:25:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:39.143 10:25:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:39.143 10:25:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:39.143 10:25:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:39.404 10:25:16 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.404 10:25:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.404 10:25:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.404 10:25:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MTllYTZiZmFmMzlhYzFlNGQ5N2QzOTFiNTUzY2FlNmE3MGJmOTc1N2YzODJkYmEybZjEEQ==: --dhchap-ctrl-secret DHHC-1:03:MmNmNDgzNzYyMmYxNThhZWI5ZjdhNDBmYTdlMTczZmVjOTE3Yjg2NGI1NzU0ZjY3MWVmYjY5YzEwN2UxMzNmZRyZR80=: 00:17:40.344 10:25:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.344 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.344 10:25:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:40.344 10:25:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.344 10:25:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.344 10:25:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.344 10:25:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:40.344 10:25:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:40.344 10:25:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:40.344 10:25:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:17:40.344 10:25:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:40.344 10:25:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:40.344 10:25:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:40.344 10:25:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:40.344 10:25:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.344 10:25:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.344 10:25:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.344 10:25:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.604 10:25:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.604 10:25:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.604 10:25:17 
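Every hostrpc call in this trace is immediately followed in the xtrace output by its expansion into scripts/rpc.py against /var/tmp/host.sock (the next entry below is one example). A functionally equivalent wrapper, written out here only to make that pattern explicit (a sketch, not the auth.sh source):

  # forward an RPC to the second SPDK application that plays the NVMe-oF host role
  hostrpc() {
      /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
          -s /var/tmp/host.sock "$@"
  }

  # usage, matching the calls seen in the log
  hostrpc bdev_nvme_get_controllers | jq -r '.[].name'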
nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.604 00:17:40.604 10:25:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:40.605 10:25:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:40.605 10:25:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.865 10:25:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.865 10:25:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.865 10:25:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.865 10:25:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.865 10:25:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.865 10:25:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:40.865 { 00:17:40.865 "cntlid": 59, 00:17:40.865 "qid": 0, 00:17:40.865 "state": "enabled", 00:17:40.865 "thread": "nvmf_tgt_poll_group_000", 00:17:40.865 "listen_address": { 00:17:40.865 "trtype": "RDMA", 00:17:40.865 "adrfam": "IPv4", 00:17:40.865 "traddr": "192.168.100.8", 00:17:40.865 "trsvcid": "4420" 00:17:40.865 }, 00:17:40.865 "peer_address": { 00:17:40.865 "trtype": "RDMA", 00:17:40.865 "adrfam": "IPv4", 00:17:40.865 "traddr": "192.168.100.8", 00:17:40.865 "trsvcid": "40157" 00:17:40.865 }, 00:17:40.865 "auth": { 00:17:40.865 "state": "completed", 00:17:40.865 "digest": "sha384", 00:17:40.865 "dhgroup": "ffdhe2048" 00:17:40.865 } 00:17:40.865 } 00:17:40.865 ]' 00:17:40.865 10:25:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:40.865 10:25:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:40.865 10:25:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:40.865 10:25:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:40.865 10:25:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:41.125 10:25:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.125 10:25:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.125 10:25:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.125 10:25:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:ZDU3OTcxNWM5MzI2NDQ3MWY5MTUxYmVkYmMzMDliNziVtd3P: --dhchap-ctrl-secret DHHC-1:02:YTA2ZGE1ZDcwYTRlYTEyMWZlMjE3MmRjZDliMTQ1M2I4NTE2YWNiYjE2MjRhZjMysjNqcg==: 00:17:42.065 10:25:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:17:42.065 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.065 10:25:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:42.065 10:25:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.065 10:25:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.065 10:25:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.065 10:25:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:42.065 10:25:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:42.065 10:25:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:42.325 10:25:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:17:42.325 10:25:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:42.325 10:25:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:42.325 10:25:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:42.325 10:25:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:42.325 10:25:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.325 10:25:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.325 10:25:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.325 10:25:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.325 10:25:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.325 10:25:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.325 10:25:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.325 00:17:42.586 10:25:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:42.586 10:25:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:42.586 10:25:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.586 10:25:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.586 10:25:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:42.586 10:25:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.586 10:25:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.586 10:25:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.586 10:25:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:42.586 { 00:17:42.586 "cntlid": 61, 00:17:42.586 "qid": 0, 00:17:42.586 "state": "enabled", 00:17:42.586 "thread": "nvmf_tgt_poll_group_000", 00:17:42.586 "listen_address": { 00:17:42.586 "trtype": "RDMA", 00:17:42.586 "adrfam": "IPv4", 00:17:42.586 "traddr": "192.168.100.8", 00:17:42.586 "trsvcid": "4420" 00:17:42.586 }, 00:17:42.586 "peer_address": { 00:17:42.586 "trtype": "RDMA", 00:17:42.586 "adrfam": "IPv4", 00:17:42.586 "traddr": "192.168.100.8", 00:17:42.586 "trsvcid": "39359" 00:17:42.586 }, 00:17:42.586 "auth": { 00:17:42.586 "state": "completed", 00:17:42.586 "digest": "sha384", 00:17:42.586 "dhgroup": "ffdhe2048" 00:17:42.586 } 00:17:42.586 } 00:17:42.586 ]' 00:17:42.586 10:25:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:42.586 10:25:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:42.586 10:25:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:42.847 10:25:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:42.847 10:25:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:42.847 10:25:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.847 10:25:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.847 10:25:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.847 10:25:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:NDhmMWQ2ZDkyMmJjMTdkNTczZTg2MjQ4MGZiMWFjMzI2MWRjYWU2ZTU3Y2Y2MGI0kyBPIg==: --dhchap-ctrl-secret DHHC-1:01:YjU5OWFhYmE5MjhjMDhlYjQ5NWM0MmMyYWJmNThmZmU4NTAg: 00:17:43.788 10:25:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.789 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.789 10:25:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:43.789 10:25:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.789 10:25:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.789 10:25:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.789 10:25:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:43.789 10:25:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:43.789 10:25:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:44.049 10:25:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:17:44.050 10:25:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:44.050 10:25:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:44.050 10:25:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:44.050 10:25:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:44.050 10:25:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.050 10:25:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:44.050 10:25:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.050 10:25:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.050 10:25:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.050 10:25:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:44.050 10:25:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:44.310 00:17:44.310 10:25:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:44.310 10:25:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:44.310 10:25:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.570 10:25:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.570 10:25:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.570 10:25:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.570 10:25:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.570 10:25:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.570 10:25:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:44.570 { 00:17:44.570 "cntlid": 63, 00:17:44.570 "qid": 0, 00:17:44.570 "state": "enabled", 00:17:44.570 "thread": "nvmf_tgt_poll_group_000", 00:17:44.570 "listen_address": { 00:17:44.570 "trtype": "RDMA", 00:17:44.570 "adrfam": "IPv4", 00:17:44.570 "traddr": "192.168.100.8", 00:17:44.570 "trsvcid": "4420" 00:17:44.570 }, 00:17:44.570 "peer_address": { 00:17:44.570 "trtype": "RDMA", 00:17:44.570 "adrfam": "IPv4", 00:17:44.570 "traddr": "192.168.100.8", 00:17:44.570 "trsvcid": "54340" 00:17:44.570 }, 00:17:44.570 "auth": { 00:17:44.570 "state": "completed", 00:17:44.570 "digest": "sha384", 
00:17:44.570 "dhgroup": "ffdhe2048" 00:17:44.570 } 00:17:44.570 } 00:17:44.570 ]' 00:17:44.570 10:25:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:44.570 10:25:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:44.570 10:25:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:44.570 10:25:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:44.570 10:25:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:44.570 10:25:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.570 10:25:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.570 10:25:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.831 10:25:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:NWI2Njg1NjQ2ZWM2Y2NlYmM0YTMyYzNkZDRiYTNlYzdiYmM3NTViNjMxYjViZjE5NTc1NGFjOWQwZDY3NDM3OWUz75w=: 00:17:45.400 10:25:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.660 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.660 10:25:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:45.660 10:25:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.660 10:25:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.660 10:25:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.660 10:25:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:45.660 10:25:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:45.660 10:25:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:45.660 10:25:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:45.921 10:25:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:17:45.921 10:25:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:45.921 10:25:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:45.921 10:25:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:45.921 10:25:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:45.921 10:25:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.921 10:25:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:17:45.921 10:25:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.921 10:25:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.921 10:25:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.921 10:25:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.921 10:25:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.921 00:17:46.180 10:25:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:46.180 10:25:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:46.180 10:25:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.180 10:25:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.180 10:25:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.180 10:25:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.180 10:25:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.180 10:25:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.180 10:25:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:46.180 { 00:17:46.180 "cntlid": 65, 00:17:46.180 "qid": 0, 00:17:46.180 "state": "enabled", 00:17:46.180 "thread": "nvmf_tgt_poll_group_000", 00:17:46.180 "listen_address": { 00:17:46.180 "trtype": "RDMA", 00:17:46.180 "adrfam": "IPv4", 00:17:46.180 "traddr": "192.168.100.8", 00:17:46.180 "trsvcid": "4420" 00:17:46.180 }, 00:17:46.180 "peer_address": { 00:17:46.180 "trtype": "RDMA", 00:17:46.180 "adrfam": "IPv4", 00:17:46.180 "traddr": "192.168.100.8", 00:17:46.180 "trsvcid": "38989" 00:17:46.180 }, 00:17:46.180 "auth": { 00:17:46.180 "state": "completed", 00:17:46.180 "digest": "sha384", 00:17:46.180 "dhgroup": "ffdhe3072" 00:17:46.180 } 00:17:46.180 } 00:17:46.180 ]' 00:17:46.180 10:25:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:46.180 10:25:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:46.180 10:25:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:46.180 10:25:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:46.439 10:25:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:46.439 10:25:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.439 10:25:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.439 10:25:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.440 10:25:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MTllYTZiZmFmMzlhYzFlNGQ5N2QzOTFiNTUzY2FlNmE3MGJmOTc1N2YzODJkYmEybZjEEQ==: --dhchap-ctrl-secret DHHC-1:03:MmNmNDgzNzYyMmYxNThhZWI5ZjdhNDBmYTdlMTczZmVjOTE3Yjg2NGI1NzU0ZjY3MWVmYjY5YzEwN2UxMzNmZRyZR80=: 00:17:47.396 10:25:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.396 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.396 10:25:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:47.396 10:25:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.396 10:25:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.396 10:25:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.396 10:25:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:47.396 10:25:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:47.396 10:25:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:47.661 10:25:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:17:47.661 10:25:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:47.661 10:25:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:47.661 10:25:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:47.661 10:25:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:47.661 10:25:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.661 10:25:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.661 10:25:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.661 10:25:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.661 10:25:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.661 10:25:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.661 10:25:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.921 00:17:47.921 10:25:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:47.921 10:25:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.921 10:25:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:48.182 10:25:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.182 10:25:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.182 10:25:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.182 10:25:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.182 10:25:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.182 10:25:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:48.182 { 00:17:48.182 "cntlid": 67, 00:17:48.182 "qid": 0, 00:17:48.182 "state": "enabled", 00:17:48.182 "thread": "nvmf_tgt_poll_group_000", 00:17:48.182 "listen_address": { 00:17:48.182 "trtype": "RDMA", 00:17:48.182 "adrfam": "IPv4", 00:17:48.182 "traddr": "192.168.100.8", 00:17:48.182 "trsvcid": "4420" 00:17:48.182 }, 00:17:48.182 "peer_address": { 00:17:48.182 "trtype": "RDMA", 00:17:48.182 "adrfam": "IPv4", 00:17:48.182 "traddr": "192.168.100.8", 00:17:48.182 "trsvcid": "57903" 00:17:48.182 }, 00:17:48.182 "auth": { 00:17:48.182 "state": "completed", 00:17:48.182 "digest": "sha384", 00:17:48.182 "dhgroup": "ffdhe3072" 00:17:48.182 } 00:17:48.182 } 00:17:48.182 ]' 00:17:48.182 10:25:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:48.182 10:25:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:48.182 10:25:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:48.182 10:25:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:48.182 10:25:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:48.182 10:25:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.182 10:25:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.182 10:25:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.442 10:25:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:ZDU3OTcxNWM5MzI2NDQ3MWY5MTUxYmVkYmMzMDliNziVtd3P: --dhchap-ctrl-secret DHHC-1:02:YTA2ZGE1ZDcwYTRlYTEyMWZlMjE3MmRjZDliMTQ1M2I4NTE2YWNiYjE2MjRhZjMysjNqcg==: 00:17:49.382 10:25:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.382 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.382 10:25:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:49.382 10:25:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.382 10:25:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.382 10:25:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.382 10:25:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:49.382 10:25:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:49.382 10:25:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:49.382 10:25:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:17:49.382 10:25:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:49.382 10:25:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:49.382 10:25:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:49.382 10:25:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:49.382 10:25:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.382 10:25:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.382 10:25:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.382 10:25:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.382 10:25:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.382 10:25:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.382 10:25:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.642 00:17:49.642 10:25:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:49.642 10:25:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:49.642 10:25:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.900 10:25:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.900 10:25:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.900 10:25:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.900 10:25:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:17:49.900 10:25:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.900 10:25:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:49.900 { 00:17:49.900 "cntlid": 69, 00:17:49.900 "qid": 0, 00:17:49.900 "state": "enabled", 00:17:49.900 "thread": "nvmf_tgt_poll_group_000", 00:17:49.900 "listen_address": { 00:17:49.900 "trtype": "RDMA", 00:17:49.900 "adrfam": "IPv4", 00:17:49.900 "traddr": "192.168.100.8", 00:17:49.900 "trsvcid": "4420" 00:17:49.900 }, 00:17:49.900 "peer_address": { 00:17:49.900 "trtype": "RDMA", 00:17:49.900 "adrfam": "IPv4", 00:17:49.900 "traddr": "192.168.100.8", 00:17:49.900 "trsvcid": "56059" 00:17:49.900 }, 00:17:49.900 "auth": { 00:17:49.900 "state": "completed", 00:17:49.900 "digest": "sha384", 00:17:49.900 "dhgroup": "ffdhe3072" 00:17:49.900 } 00:17:49.900 } 00:17:49.900 ]' 00:17:49.900 10:25:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:49.900 10:25:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:49.900 10:25:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:49.900 10:25:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:49.900 10:25:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:50.215 10:25:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.215 10:25:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.215 10:25:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.215 10:25:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:NDhmMWQ2ZDkyMmJjMTdkNTczZTg2MjQ4MGZiMWFjMzI2MWRjYWU2ZTU3Y2Y2MGI0kyBPIg==: --dhchap-ctrl-secret DHHC-1:01:YjU5OWFhYmE5MjhjMDhlYjQ5NWM0MmMyYWJmNThmZmU4NTAg: 00:17:51.153 10:25:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.153 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.153 10:25:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:51.153 10:25:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.153 10:25:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.153 10:25:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.153 10:25:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:51.153 10:25:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:51.153 10:25:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:51.414 10:25:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 
ffdhe3072 3 00:17:51.414 10:25:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:51.414 10:25:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:51.414 10:25:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:51.414 10:25:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:51.414 10:25:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.414 10:25:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:51.414 10:25:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.414 10:25:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.414 10:25:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.414 10:25:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:51.414 10:25:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:51.678 00:17:51.679 10:25:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:51.679 10:25:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:51.679 10:25:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.679 10:25:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.679 10:25:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.679 10:25:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.679 10:25:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.679 10:25:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.679 10:25:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:51.679 { 00:17:51.679 "cntlid": 71, 00:17:51.679 "qid": 0, 00:17:51.679 "state": "enabled", 00:17:51.679 "thread": "nvmf_tgt_poll_group_000", 00:17:51.679 "listen_address": { 00:17:51.679 "trtype": "RDMA", 00:17:51.679 "adrfam": "IPv4", 00:17:51.679 "traddr": "192.168.100.8", 00:17:51.679 "trsvcid": "4420" 00:17:51.679 }, 00:17:51.679 "peer_address": { 00:17:51.679 "trtype": "RDMA", 00:17:51.679 "adrfam": "IPv4", 00:17:51.679 "traddr": "192.168.100.8", 00:17:51.679 "trsvcid": "38879" 00:17:51.679 }, 00:17:51.679 "auth": { 00:17:51.679 "state": "completed", 00:17:51.679 "digest": "sha384", 00:17:51.679 "dhgroup": "ffdhe3072" 00:17:51.679 } 00:17:51.679 } 00:17:51.679 ]' 00:17:51.679 10:25:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:51.679 10:25:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:17:51.679 10:25:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:51.940 10:25:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:51.940 10:25:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:51.940 10:25:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.940 10:25:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.940 10:25:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.940 10:25:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:NWI2Njg1NjQ2ZWM2Y2NlYmM0YTMyYzNkZDRiYTNlYzdiYmM3NTViNjMxYjViZjE5NTc1NGFjOWQwZDY3NDM3OWUz75w=: 00:17:52.881 10:25:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.881 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.881 10:25:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:52.881 10:25:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.881 10:25:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.881 10:25:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.881 10:25:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:52.881 10:25:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:52.881 10:25:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:52.881 10:25:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:53.141 10:25:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:17:53.141 10:25:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:53.141 10:25:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:53.141 10:25:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:53.141 10:25:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:53.141 10:25:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.141 10:25:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:53.141 10:25:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.141 10:25:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.141 10:25:30 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.141 10:25:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:53.141 10:25:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:53.402 00:17:53.402 10:25:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:53.402 10:25:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:53.402 10:25:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.663 10:25:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.663 10:25:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.663 10:25:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.663 10:25:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.663 10:25:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.663 10:25:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:53.663 { 00:17:53.663 "cntlid": 73, 00:17:53.663 "qid": 0, 00:17:53.663 "state": "enabled", 00:17:53.663 "thread": "nvmf_tgt_poll_group_000", 00:17:53.663 "listen_address": { 00:17:53.663 "trtype": "RDMA", 00:17:53.663 "adrfam": "IPv4", 00:17:53.663 "traddr": "192.168.100.8", 00:17:53.663 "trsvcid": "4420" 00:17:53.663 }, 00:17:53.663 "peer_address": { 00:17:53.663 "trtype": "RDMA", 00:17:53.663 "adrfam": "IPv4", 00:17:53.663 "traddr": "192.168.100.8", 00:17:53.663 "trsvcid": "54590" 00:17:53.663 }, 00:17:53.663 "auth": { 00:17:53.663 "state": "completed", 00:17:53.663 "digest": "sha384", 00:17:53.663 "dhgroup": "ffdhe4096" 00:17:53.663 } 00:17:53.663 } 00:17:53.663 ]' 00:17:53.663 10:25:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:53.663 10:25:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:53.663 10:25:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:53.663 10:25:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:53.663 10:25:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:53.663 10:25:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.663 10:25:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.663 10:25:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.922 10:25:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MTllYTZiZmFmMzlhYzFlNGQ5N2QzOTFiNTUzY2FlNmE3MGJmOTc1N2YzODJkYmEybZjEEQ==: --dhchap-ctrl-secret DHHC-1:03:MmNmNDgzNzYyMmYxNThhZWI5ZjdhNDBmYTdlMTczZmVjOTE3Yjg2NGI1NzU0ZjY3MWVmYjY5YzEwN2UxMzNmZRyZR80=: 00:17:54.861 10:25:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.861 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.861 10:25:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:54.861 10:25:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.861 10:25:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.861 10:25:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.861 10:25:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:54.861 10:25:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:54.861 10:25:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:55.121 10:25:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:17:55.121 10:25:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:55.121 10:25:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:55.121 10:25:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:55.121 10:25:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:55.121 10:25:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.121 10:25:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:55.121 10:25:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.121 10:25:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.121 10:25:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.121 10:25:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:55.121 10:25:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:55.381 00:17:55.381 10:25:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:55.381 10:25:32 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.381 10:25:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:55.381 10:25:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.381 10:25:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.381 10:25:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.381 10:25:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.381 10:25:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.381 10:25:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:55.381 { 00:17:55.381 "cntlid": 75, 00:17:55.381 "qid": 0, 00:17:55.381 "state": "enabled", 00:17:55.381 "thread": "nvmf_tgt_poll_group_000", 00:17:55.381 "listen_address": { 00:17:55.381 "trtype": "RDMA", 00:17:55.381 "adrfam": "IPv4", 00:17:55.381 "traddr": "192.168.100.8", 00:17:55.381 "trsvcid": "4420" 00:17:55.382 }, 00:17:55.382 "peer_address": { 00:17:55.382 "trtype": "RDMA", 00:17:55.382 "adrfam": "IPv4", 00:17:55.382 "traddr": "192.168.100.8", 00:17:55.382 "trsvcid": "57974" 00:17:55.382 }, 00:17:55.382 "auth": { 00:17:55.382 "state": "completed", 00:17:55.382 "digest": "sha384", 00:17:55.382 "dhgroup": "ffdhe4096" 00:17:55.382 } 00:17:55.382 } 00:17:55.382 ]' 00:17:55.382 10:25:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:55.382 10:25:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:55.382 10:25:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:55.642 10:25:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:55.642 10:25:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:55.642 10:25:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.642 10:25:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.642 10:25:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.642 10:25:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:ZDU3OTcxNWM5MzI2NDQ3MWY5MTUxYmVkYmMzMDliNziVtd3P: --dhchap-ctrl-secret DHHC-1:02:YTA2ZGE1ZDcwYTRlYTEyMWZlMjE3MmRjZDliMTQ1M2I4NTE2YWNiYjE2MjRhZjMysjNqcg==: 00:17:56.588 10:25:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.588 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.588 10:25:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:56.588 10:25:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.588 10:25:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.851 10:25:33 nvmf_rdma.nvmf_auth_target 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.851 10:25:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:56.851 10:25:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:56.851 10:25:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:56.851 10:25:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:17:56.851 10:25:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:56.851 10:25:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:56.851 10:25:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:56.851 10:25:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:56.851 10:25:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.851 10:25:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:56.851 10:25:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.851 10:25:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.851 10:25:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.851 10:25:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:56.851 10:25:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:57.112 00:17:57.112 10:25:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:57.112 10:25:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.112 10:25:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:57.373 10:25:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.373 10:25:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.373 10:25:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.373 10:25:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.373 10:25:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.373 10:25:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:57.373 { 00:17:57.373 "cntlid": 77, 00:17:57.373 "qid": 0, 00:17:57.373 "state": "enabled", 00:17:57.373 "thread": "nvmf_tgt_poll_group_000", 
00:17:57.373 "listen_address": { 00:17:57.373 "trtype": "RDMA", 00:17:57.373 "adrfam": "IPv4", 00:17:57.373 "traddr": "192.168.100.8", 00:17:57.373 "trsvcid": "4420" 00:17:57.373 }, 00:17:57.373 "peer_address": { 00:17:57.373 "trtype": "RDMA", 00:17:57.373 "adrfam": "IPv4", 00:17:57.373 "traddr": "192.168.100.8", 00:17:57.373 "trsvcid": "54930" 00:17:57.373 }, 00:17:57.373 "auth": { 00:17:57.373 "state": "completed", 00:17:57.373 "digest": "sha384", 00:17:57.373 "dhgroup": "ffdhe4096" 00:17:57.373 } 00:17:57.373 } 00:17:57.373 ]' 00:17:57.373 10:25:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:57.373 10:25:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:57.373 10:25:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:57.373 10:25:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:57.373 10:25:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:57.373 10:25:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.373 10:25:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.373 10:25:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.634 10:25:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:NDhmMWQ2ZDkyMmJjMTdkNTczZTg2MjQ4MGZiMWFjMzI2MWRjYWU2ZTU3Y2Y2MGI0kyBPIg==: --dhchap-ctrl-secret DHHC-1:01:YjU5OWFhYmE5MjhjMDhlYjQ5NWM0MmMyYWJmNThmZmU4NTAg: 00:17:58.599 10:25:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.599 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.599 10:25:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:58.599 10:25:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.599 10:25:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.599 10:25:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.599 10:25:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:58.599 10:25:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:58.599 10:25:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:58.890 10:25:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:17:58.890 10:25:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:58.890 10:25:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:58.890 10:25:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:58.890 10:25:35 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:58.890 10:25:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.890 10:25:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:58.890 10:25:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.890 10:25:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.890 10:25:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.890 10:25:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:58.890 10:25:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:58.890 00:17:59.151 10:25:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:59.151 10:25:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:59.151 10:25:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.151 10:25:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.151 10:25:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.151 10:25:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.151 10:25:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.151 10:25:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.151 10:25:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:59.151 { 00:17:59.151 "cntlid": 79, 00:17:59.151 "qid": 0, 00:17:59.151 "state": "enabled", 00:17:59.151 "thread": "nvmf_tgt_poll_group_000", 00:17:59.151 "listen_address": { 00:17:59.151 "trtype": "RDMA", 00:17:59.151 "adrfam": "IPv4", 00:17:59.151 "traddr": "192.168.100.8", 00:17:59.151 "trsvcid": "4420" 00:17:59.151 }, 00:17:59.151 "peer_address": { 00:17:59.151 "trtype": "RDMA", 00:17:59.151 "adrfam": "IPv4", 00:17:59.151 "traddr": "192.168.100.8", 00:17:59.151 "trsvcid": "34005" 00:17:59.151 }, 00:17:59.151 "auth": { 00:17:59.151 "state": "completed", 00:17:59.151 "digest": "sha384", 00:17:59.151 "dhgroup": "ffdhe4096" 00:17:59.151 } 00:17:59.151 } 00:17:59.151 ]' 00:17:59.151 10:25:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:59.151 10:25:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:59.151 10:25:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:59.412 10:25:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:59.412 10:25:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 
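
The entries above and below repeat the same authentication cycle once per (digest, dhgroup, key) combination: target/auth.sh restricts the host to one DH-HMAC-CHAP digest/dhgroup pair, adds the host to the subsystem with that key, attaches a controller, checks the qpair's negotiated auth parameters, then redoes the connection through nvme-cli before cleaning up. A minimal shell sketch of one such cycle follows, using only the RPCs and nvme-cli invocations visible in this log; the NQNs, address, and rpc.py path are copied from the log, the DHHC-1 secrets are placeholders, the target-side RPC socket is assumed to be the default, and key1/ckey1 are assumed to have been registered earlier in auth.sh.

#!/usr/bin/env bash
# Sketch of one connect_authenticate iteration (digest=sha384, dhgroup=ffdhe3072, keyid=1).
# Assumptions: the nvmf target and the host bdev service (-s /var/tmp/host.sock) are already
# running, and DH-HMAC-CHAP keys "key1"/"ckey1" were loaded earlier in the test.
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
traddr=192.168.100.8

# Limit the host-side initiator to a single digest/dhgroup pair (auth.sh@94).
$rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

# Allow the host on the subsystem with the key and, for bidirectional auth, a controller key (auth.sh@39).
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Attach a controller from the host side with the same key pair (auth.sh@40).
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
    -a "$traddr" -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Verify the controller exists and the qpair negotiated the expected parameters (auth.sh@44-48).
$rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
$rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.digest'       # expect: sha384
$rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.dhgroup'      # expect: ffdhe3072
$rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'        # expect: completed

# Drop the bdev controller and redo the handshake through nvme-cli with the raw
# DHHC-1 secrets (placeholders here), then disconnect (auth.sh@49-55).
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
nvme connect -t rdma -a "$traddr" -n "$subnqn" -i 1 -q "$hostnqn" \
    --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 \
    --dhchap-secret 'DHHC-1:00:<base64 host key>:' \
    --dhchap-ctrl-secret 'DHHC-1:03:<base64 controller key>:'
nvme disconnect -n "$subnqn"

# Remove the host so the next (digest, dhgroup, key) combination starts clean (auth.sh@56).
$rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

Note that the key3 iterations in this log pass only --dhchap-key key3 and no controller secret: the ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"} expansion at auth.sh@37 expands to nothing for that key, so those connections exercise unidirectional (host-only) authentication.
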
00:17:59.412 10:25:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.412 10:25:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.412 10:25:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.412 10:25:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:NWI2Njg1NjQ2ZWM2Y2NlYmM0YTMyYzNkZDRiYTNlYzdiYmM3NTViNjMxYjViZjE5NTc1NGFjOWQwZDY3NDM3OWUz75w=: 00:18:00.351 10:25:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.611 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.611 10:25:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:00.611 10:25:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.611 10:25:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.611 10:25:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.611 10:25:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:00.611 10:25:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:00.611 10:25:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:00.611 10:25:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:00.611 10:25:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:18:00.611 10:25:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:00.611 10:25:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:00.611 10:25:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:00.611 10:25:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:00.611 10:25:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.611 10:25:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:00.611 10:25:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.611 10:25:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.611 10:25:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.611 10:25:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:18:00.611 10:25:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.208 00:18:01.208 10:25:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:01.208 10:25:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.208 10:25:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:01.208 10:25:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.208 10:25:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.208 10:25:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.208 10:25:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.208 10:25:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.208 10:25:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:01.208 { 00:18:01.208 "cntlid": 81, 00:18:01.208 "qid": 0, 00:18:01.208 "state": "enabled", 00:18:01.208 "thread": "nvmf_tgt_poll_group_000", 00:18:01.208 "listen_address": { 00:18:01.208 "trtype": "RDMA", 00:18:01.208 "adrfam": "IPv4", 00:18:01.208 "traddr": "192.168.100.8", 00:18:01.208 "trsvcid": "4420" 00:18:01.208 }, 00:18:01.208 "peer_address": { 00:18:01.208 "trtype": "RDMA", 00:18:01.208 "adrfam": "IPv4", 00:18:01.208 "traddr": "192.168.100.8", 00:18:01.208 "trsvcid": "58091" 00:18:01.208 }, 00:18:01.208 "auth": { 00:18:01.208 "state": "completed", 00:18:01.208 "digest": "sha384", 00:18:01.208 "dhgroup": "ffdhe6144" 00:18:01.208 } 00:18:01.208 } 00:18:01.208 ]' 00:18:01.208 10:25:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:01.208 10:25:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:01.208 10:25:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:01.208 10:25:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:01.208 10:25:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:01.468 10:25:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.468 10:25:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.468 10:25:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.468 10:25:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MTllYTZiZmFmMzlhYzFlNGQ5N2QzOTFiNTUzY2FlNmE3MGJmOTc1N2YzODJkYmEybZjEEQ==: --dhchap-ctrl-secret DHHC-1:03:MmNmNDgzNzYyMmYxNThhZWI5ZjdhNDBmYTdlMTczZmVjOTE3Yjg2NGI1NzU0ZjY3MWVmYjY5YzEwN2UxMzNmZRyZR80=: 00:18:02.408 
10:25:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.408 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.408 10:25:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:02.408 10:25:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.408 10:25:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.408 10:25:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.408 10:25:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:02.408 10:25:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:02.408 10:25:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:02.668 10:25:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:18:02.668 10:25:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:02.668 10:25:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:02.668 10:25:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:02.668 10:25:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:02.668 10:25:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.668 10:25:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.668 10:25:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.668 10:25:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.668 10:25:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.668 10:25:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.668 10:25:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.929 00:18:02.929 10:25:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:02.929 10:25:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:02.929 10:25:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.190 10:25:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.190 10:25:40 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.190 10:25:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.190 10:25:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.190 10:25:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.190 10:25:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:03.190 { 00:18:03.190 "cntlid": 83, 00:18:03.190 "qid": 0, 00:18:03.190 "state": "enabled", 00:18:03.190 "thread": "nvmf_tgt_poll_group_000", 00:18:03.190 "listen_address": { 00:18:03.190 "trtype": "RDMA", 00:18:03.190 "adrfam": "IPv4", 00:18:03.190 "traddr": "192.168.100.8", 00:18:03.190 "trsvcid": "4420" 00:18:03.190 }, 00:18:03.190 "peer_address": { 00:18:03.190 "trtype": "RDMA", 00:18:03.190 "adrfam": "IPv4", 00:18:03.190 "traddr": "192.168.100.8", 00:18:03.190 "trsvcid": "51678" 00:18:03.190 }, 00:18:03.190 "auth": { 00:18:03.190 "state": "completed", 00:18:03.190 "digest": "sha384", 00:18:03.190 "dhgroup": "ffdhe6144" 00:18:03.190 } 00:18:03.190 } 00:18:03.190 ]' 00:18:03.190 10:25:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:03.190 10:25:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:03.190 10:25:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:03.190 10:25:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:03.190 10:25:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:03.190 10:25:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.190 10:25:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.190 10:25:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.450 10:25:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:ZDU3OTcxNWM5MzI2NDQ3MWY5MTUxYmVkYmMzMDliNziVtd3P: --dhchap-ctrl-secret DHHC-1:02:YTA2ZGE1ZDcwYTRlYTEyMWZlMjE3MmRjZDliMTQ1M2I4NTE2YWNiYjE2MjRhZjMysjNqcg==: 00:18:04.389 10:25:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.389 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.389 10:25:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:04.389 10:25:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.389 10:25:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.389 10:25:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.389 10:25:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:04.389 10:25:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:04.389 10:25:41 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:04.650 10:25:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:18:04.650 10:25:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:04.650 10:25:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:04.650 10:25:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:04.650 10:25:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:04.650 10:25:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.650 10:25:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.650 10:25:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.650 10:25:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.650 10:25:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.650 10:25:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.650 10:25:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.909 00:18:04.909 10:25:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:04.909 10:25:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:04.909 10:25:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.170 10:25:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.170 10:25:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.171 10:25:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.171 10:25:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.171 10:25:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.171 10:25:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:05.171 { 00:18:05.171 "cntlid": 85, 00:18:05.171 "qid": 0, 00:18:05.171 "state": "enabled", 00:18:05.171 "thread": "nvmf_tgt_poll_group_000", 00:18:05.171 "listen_address": { 00:18:05.171 "trtype": "RDMA", 00:18:05.171 "adrfam": "IPv4", 00:18:05.171 "traddr": "192.168.100.8", 00:18:05.171 "trsvcid": "4420" 00:18:05.171 }, 00:18:05.171 "peer_address": { 00:18:05.171 "trtype": "RDMA", 00:18:05.171 "adrfam": "IPv4", 00:18:05.171 "traddr": "192.168.100.8", 00:18:05.171 
"trsvcid": "51135" 00:18:05.171 }, 00:18:05.171 "auth": { 00:18:05.171 "state": "completed", 00:18:05.171 "digest": "sha384", 00:18:05.171 "dhgroup": "ffdhe6144" 00:18:05.171 } 00:18:05.171 } 00:18:05.171 ]' 00:18:05.171 10:25:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:05.171 10:25:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:05.171 10:25:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:05.171 10:25:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:05.171 10:25:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:05.171 10:25:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.171 10:25:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.171 10:25:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.434 10:25:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:NDhmMWQ2ZDkyMmJjMTdkNTczZTg2MjQ4MGZiMWFjMzI2MWRjYWU2ZTU3Y2Y2MGI0kyBPIg==: --dhchap-ctrl-secret DHHC-1:01:YjU5OWFhYmE5MjhjMDhlYjQ5NWM0MmMyYWJmNThmZmU4NTAg: 00:18:06.385 10:25:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.385 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.385 10:25:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:06.385 10:25:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.385 10:25:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.385 10:25:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.385 10:25:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:06.385 10:25:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:06.385 10:25:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:06.645 10:25:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:18:06.645 10:25:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:06.645 10:25:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:06.645 10:25:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:06.645 10:25:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:06.645 10:25:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.645 10:25:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:06.645 10:25:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.645 10:25:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.645 10:25:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.645 10:25:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:06.645 10:25:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:06.905 00:18:06.905 10:25:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:06.905 10:25:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:06.905 10:25:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.167 10:25:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.167 10:25:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.167 10:25:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.167 10:25:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.167 10:25:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.167 10:25:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:07.167 { 00:18:07.167 "cntlid": 87, 00:18:07.167 "qid": 0, 00:18:07.167 "state": "enabled", 00:18:07.167 "thread": "nvmf_tgt_poll_group_000", 00:18:07.167 "listen_address": { 00:18:07.167 "trtype": "RDMA", 00:18:07.167 "adrfam": "IPv4", 00:18:07.167 "traddr": "192.168.100.8", 00:18:07.167 "trsvcid": "4420" 00:18:07.167 }, 00:18:07.167 "peer_address": { 00:18:07.167 "trtype": "RDMA", 00:18:07.167 "adrfam": "IPv4", 00:18:07.167 "traddr": "192.168.100.8", 00:18:07.167 "trsvcid": "50982" 00:18:07.167 }, 00:18:07.167 "auth": { 00:18:07.167 "state": "completed", 00:18:07.167 "digest": "sha384", 00:18:07.167 "dhgroup": "ffdhe6144" 00:18:07.167 } 00:18:07.167 } 00:18:07.167 ]' 00:18:07.167 10:25:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:07.167 10:25:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:07.167 10:25:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:07.167 10:25:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:07.167 10:25:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:07.167 10:25:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.167 10:25:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.167 10:25:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.427 10:25:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:NWI2Njg1NjQ2ZWM2Y2NlYmM0YTMyYzNkZDRiYTNlYzdiYmM3NTViNjMxYjViZjE5NTc1NGFjOWQwZDY3NDM3OWUz75w=: 00:18:08.367 10:25:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.367 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.367 10:25:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:08.367 10:25:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.367 10:25:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.367 10:25:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.367 10:25:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:08.367 10:25:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:08.367 10:25:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:08.367 10:25:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:08.626 10:25:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:18:08.626 10:25:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:08.626 10:25:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:08.626 10:25:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:08.626 10:25:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:08.626 10:25:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.626 10:25:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.626 10:25:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.626 10:25:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.626 10:25:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.626 10:25:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.626 10:25:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:09.195 00:18:09.195 10:25:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:09.195 10:25:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.195 10:25:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:09.195 10:25:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.195 10:25:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.195 10:25:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.195 10:25:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.195 10:25:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.195 10:25:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:09.195 { 00:18:09.195 "cntlid": 89, 00:18:09.195 "qid": 0, 00:18:09.195 "state": "enabled", 00:18:09.195 "thread": "nvmf_tgt_poll_group_000", 00:18:09.195 "listen_address": { 00:18:09.195 "trtype": "RDMA", 00:18:09.195 "adrfam": "IPv4", 00:18:09.195 "traddr": "192.168.100.8", 00:18:09.195 "trsvcid": "4420" 00:18:09.195 }, 00:18:09.195 "peer_address": { 00:18:09.195 "trtype": "RDMA", 00:18:09.195 "adrfam": "IPv4", 00:18:09.195 "traddr": "192.168.100.8", 00:18:09.195 "trsvcid": "43173" 00:18:09.195 }, 00:18:09.195 "auth": { 00:18:09.195 "state": "completed", 00:18:09.195 "digest": "sha384", 00:18:09.195 "dhgroup": "ffdhe8192" 00:18:09.195 } 00:18:09.195 } 00:18:09.195 ]' 00:18:09.195 10:25:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:09.454 10:25:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:09.454 10:25:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:09.454 10:25:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:09.454 10:25:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:09.454 10:25:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.454 10:25:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.454 10:25:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.713 10:25:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MTllYTZiZmFmMzlhYzFlNGQ5N2QzOTFiNTUzY2FlNmE3MGJmOTc1N2YzODJkYmEybZjEEQ==: --dhchap-ctrl-secret DHHC-1:03:MmNmNDgzNzYyMmYxNThhZWI5ZjdhNDBmYTdlMTczZmVjOTE3Yjg2NGI1NzU0ZjY3MWVmYjY5YzEwN2UxMzNmZRyZR80=: 00:18:10.296 10:25:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.556 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.556 10:25:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:10.556 10:25:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.556 10:25:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.556 10:25:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.556 10:25:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:10.556 10:25:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:10.556 10:25:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:10.816 10:25:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:18:10.816 10:25:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:10.816 10:25:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:10.816 10:25:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:10.816 10:25:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:10.816 10:25:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.816 10:25:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.816 10:25:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.816 10:25:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.816 10:25:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.816 10:25:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.816 10:25:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.385 00:18:11.385 10:25:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:11.385 10:25:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.385 10:25:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:11.385 10:25:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.385 10:25:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.386 10:25:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.386 10:25:48 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:11.386 10:25:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.386 10:25:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:11.386 { 00:18:11.386 "cntlid": 91, 00:18:11.386 "qid": 0, 00:18:11.386 "state": "enabled", 00:18:11.386 "thread": "nvmf_tgt_poll_group_000", 00:18:11.386 "listen_address": { 00:18:11.386 "trtype": "RDMA", 00:18:11.386 "adrfam": "IPv4", 00:18:11.386 "traddr": "192.168.100.8", 00:18:11.386 "trsvcid": "4420" 00:18:11.386 }, 00:18:11.386 "peer_address": { 00:18:11.386 "trtype": "RDMA", 00:18:11.386 "adrfam": "IPv4", 00:18:11.386 "traddr": "192.168.100.8", 00:18:11.386 "trsvcid": "46692" 00:18:11.386 }, 00:18:11.386 "auth": { 00:18:11.386 "state": "completed", 00:18:11.386 "digest": "sha384", 00:18:11.386 "dhgroup": "ffdhe8192" 00:18:11.386 } 00:18:11.386 } 00:18:11.386 ]' 00:18:11.386 10:25:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:11.386 10:25:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:11.386 10:25:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:11.386 10:25:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:11.386 10:25:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:11.645 10:25:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.645 10:25:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.645 10:25:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.645 10:25:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:ZDU3OTcxNWM5MzI2NDQ3MWY5MTUxYmVkYmMzMDliNziVtd3P: --dhchap-ctrl-secret DHHC-1:02:YTA2ZGE1ZDcwYTRlYTEyMWZlMjE3MmRjZDliMTQ1M2I4NTE2YWNiYjE2MjRhZjMysjNqcg==: 00:18:12.584 10:25:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.585 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.585 10:25:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:12.585 10:25:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.585 10:25:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.585 10:25:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.585 10:25:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:12.585 10:25:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:12.585 10:25:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:12.845 10:25:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- 
# connect_authenticate sha384 ffdhe8192 2 00:18:12.845 10:25:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:12.845 10:25:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:12.845 10:25:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:12.845 10:25:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:12.845 10:25:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.845 10:25:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.845 10:25:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.845 10:25:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.845 10:25:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.845 10:25:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.845 10:25:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.415 00:18:13.415 10:25:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:13.415 10:25:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:13.415 10:25:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.415 10:25:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.415 10:25:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.415 10:25:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.415 10:25:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.415 10:25:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.415 10:25:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:13.415 { 00:18:13.415 "cntlid": 93, 00:18:13.415 "qid": 0, 00:18:13.415 "state": "enabled", 00:18:13.415 "thread": "nvmf_tgt_poll_group_000", 00:18:13.415 "listen_address": { 00:18:13.415 "trtype": "RDMA", 00:18:13.415 "adrfam": "IPv4", 00:18:13.415 "traddr": "192.168.100.8", 00:18:13.415 "trsvcid": "4420" 00:18:13.415 }, 00:18:13.415 "peer_address": { 00:18:13.415 "trtype": "RDMA", 00:18:13.415 "adrfam": "IPv4", 00:18:13.415 "traddr": "192.168.100.8", 00:18:13.415 "trsvcid": "39169" 00:18:13.415 }, 00:18:13.415 "auth": { 00:18:13.415 "state": "completed", 00:18:13.415 "digest": "sha384", 00:18:13.415 "dhgroup": "ffdhe8192" 00:18:13.415 } 00:18:13.415 } 00:18:13.415 ]' 00:18:13.415 10:25:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:18:13.675 10:25:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:13.675 10:25:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:13.675 10:25:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:13.675 10:25:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:13.675 10:25:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.675 10:25:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.675 10:25:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.936 10:25:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:NDhmMWQ2ZDkyMmJjMTdkNTczZTg2MjQ4MGZiMWFjMzI2MWRjYWU2ZTU3Y2Y2MGI0kyBPIg==: --dhchap-ctrl-secret DHHC-1:01:YjU5OWFhYmE5MjhjMDhlYjQ5NWM0MmMyYWJmNThmZmU4NTAg: 00:18:14.878 10:25:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.878 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.878 10:25:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:14.878 10:25:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.878 10:25:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.878 10:25:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.878 10:25:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:14.878 10:25:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:14.878 10:25:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:14.878 10:25:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:18:14.878 10:25:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:14.878 10:25:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:14.878 10:25:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:14.878 10:25:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:14.878 10:25:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.878 10:25:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:14.878 10:25:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.878 10:25:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.878 10:25:52 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.878 10:25:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:14.878 10:25:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:15.450 00:18:15.450 10:25:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:15.450 10:25:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.450 10:25:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:15.710 10:25:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.710 10:25:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.710 10:25:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.710 10:25:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.710 10:25:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.710 10:25:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:15.710 { 00:18:15.710 "cntlid": 95, 00:18:15.710 "qid": 0, 00:18:15.710 "state": "enabled", 00:18:15.710 "thread": "nvmf_tgt_poll_group_000", 00:18:15.710 "listen_address": { 00:18:15.710 "trtype": "RDMA", 00:18:15.710 "adrfam": "IPv4", 00:18:15.710 "traddr": "192.168.100.8", 00:18:15.710 "trsvcid": "4420" 00:18:15.710 }, 00:18:15.710 "peer_address": { 00:18:15.710 "trtype": "RDMA", 00:18:15.710 "adrfam": "IPv4", 00:18:15.710 "traddr": "192.168.100.8", 00:18:15.710 "trsvcid": "49936" 00:18:15.710 }, 00:18:15.710 "auth": { 00:18:15.710 "state": "completed", 00:18:15.710 "digest": "sha384", 00:18:15.710 "dhgroup": "ffdhe8192" 00:18:15.711 } 00:18:15.711 } 00:18:15.711 ]' 00:18:15.711 10:25:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:15.711 10:25:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:15.711 10:25:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:15.711 10:25:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:15.711 10:25:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:15.711 10:25:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.711 10:25:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.711 10:25:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.971 10:25:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:NWI2Njg1NjQ2ZWM2Y2NlYmM0YTMyYzNkZDRiYTNlYzdiYmM3NTViNjMxYjViZjE5NTc1NGFjOWQwZDY3NDM3OWUz75w=: 00:18:16.911 10:25:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.911 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.911 10:25:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:16.911 10:25:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.911 10:25:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.911 10:25:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.911 10:25:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:16.911 10:25:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:16.911 10:25:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:16.911 10:25:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:16.911 10:25:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:17.171 10:25:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:18:17.171 10:25:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:17.171 10:25:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:17.171 10:25:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:17.171 10:25:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:17.171 10:25:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.171 10:25:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:17.171 10:25:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.171 10:25:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.171 10:25:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.171 10:25:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:17.171 10:25:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:17.431 00:18:17.431 10:25:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:18:17.431 10:25:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:17.431 10:25:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.431 10:25:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.431 10:25:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.431 10:25:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.431 10:25:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.431 10:25:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.431 10:25:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:17.431 { 00:18:17.431 "cntlid": 97, 00:18:17.431 "qid": 0, 00:18:17.431 "state": "enabled", 00:18:17.431 "thread": "nvmf_tgt_poll_group_000", 00:18:17.431 "listen_address": { 00:18:17.431 "trtype": "RDMA", 00:18:17.431 "adrfam": "IPv4", 00:18:17.431 "traddr": "192.168.100.8", 00:18:17.431 "trsvcid": "4420" 00:18:17.431 }, 00:18:17.431 "peer_address": { 00:18:17.431 "trtype": "RDMA", 00:18:17.431 "adrfam": "IPv4", 00:18:17.431 "traddr": "192.168.100.8", 00:18:17.431 "trsvcid": "58732" 00:18:17.431 }, 00:18:17.431 "auth": { 00:18:17.431 "state": "completed", 00:18:17.431 "digest": "sha512", 00:18:17.431 "dhgroup": "null" 00:18:17.431 } 00:18:17.431 } 00:18:17.431 ]' 00:18:17.431 10:25:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:17.692 10:25:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:17.692 10:25:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:17.692 10:25:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:17.692 10:25:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:17.692 10:25:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.692 10:25:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.692 10:25:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.952 10:25:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MTllYTZiZmFmMzlhYzFlNGQ5N2QzOTFiNTUzY2FlNmE3MGJmOTc1N2YzODJkYmEybZjEEQ==: --dhchap-ctrl-secret DHHC-1:03:MmNmNDgzNzYyMmYxNThhZWI5ZjdhNDBmYTdlMTczZmVjOTE3Yjg2NGI1NzU0ZjY3MWVmYjY5YzEwN2UxMzNmZRyZR80=: 00:18:18.893 10:25:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.893 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.893 10:25:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:18.893 10:25:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.893 10:25:55 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.893 10:25:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.893 10:25:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:18.893 10:25:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:18.893 10:25:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:18.893 10:25:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:18:18.893 10:25:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:18.893 10:25:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:18.893 10:25:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:18.893 10:25:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:18.893 10:25:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.893 10:25:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.893 10:25:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.893 10:25:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.893 10:25:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.893 10:25:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.893 10:25:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.154 00:18:19.154 10:25:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:19.154 10:25:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:19.154 10:25:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.416 10:25:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.416 10:25:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.416 10:25:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.416 10:25:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.416 10:25:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.416 10:25:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:19.416 { 00:18:19.416 "cntlid": 99, 
00:18:19.416 "qid": 0, 00:18:19.416 "state": "enabled", 00:18:19.416 "thread": "nvmf_tgt_poll_group_000", 00:18:19.416 "listen_address": { 00:18:19.416 "trtype": "RDMA", 00:18:19.416 "adrfam": "IPv4", 00:18:19.416 "traddr": "192.168.100.8", 00:18:19.416 "trsvcid": "4420" 00:18:19.416 }, 00:18:19.416 "peer_address": { 00:18:19.416 "trtype": "RDMA", 00:18:19.416 "adrfam": "IPv4", 00:18:19.416 "traddr": "192.168.100.8", 00:18:19.416 "trsvcid": "52758" 00:18:19.416 }, 00:18:19.416 "auth": { 00:18:19.416 "state": "completed", 00:18:19.416 "digest": "sha512", 00:18:19.416 "dhgroup": "null" 00:18:19.416 } 00:18:19.416 } 00:18:19.416 ]' 00:18:19.416 10:25:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:19.416 10:25:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:19.416 10:25:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:19.416 10:25:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:19.416 10:25:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:19.416 10:25:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.416 10:25:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.416 10:25:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.677 10:25:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:ZDU3OTcxNWM5MzI2NDQ3MWY5MTUxYmVkYmMzMDliNziVtd3P: --dhchap-ctrl-secret DHHC-1:02:YTA2ZGE1ZDcwYTRlYTEyMWZlMjE3MmRjZDliMTQ1M2I4NTE2YWNiYjE2MjRhZjMysjNqcg==: 00:18:20.617 10:25:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.617 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.617 10:25:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:20.617 10:25:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.617 10:25:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.617 10:25:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.617 10:25:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:20.617 10:25:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:20.617 10:25:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:20.877 10:25:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:18:20.877 10:25:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:20.877 10:25:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:20.877 10:25:57 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=null 00:18:20.877 10:25:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:20.877 10:25:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.877 10:25:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:20.877 10:25:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.877 10:25:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.877 10:25:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.877 10:25:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:20.877 10:25:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:21.137 00:18:21.137 10:25:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:21.137 10:25:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.137 10:25:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:21.137 10:25:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.137 10:25:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.137 10:25:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.137 10:25:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.137 10:25:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.137 10:25:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:21.137 { 00:18:21.137 "cntlid": 101, 00:18:21.137 "qid": 0, 00:18:21.137 "state": "enabled", 00:18:21.137 "thread": "nvmf_tgt_poll_group_000", 00:18:21.137 "listen_address": { 00:18:21.137 "trtype": "RDMA", 00:18:21.137 "adrfam": "IPv4", 00:18:21.137 "traddr": "192.168.100.8", 00:18:21.137 "trsvcid": "4420" 00:18:21.137 }, 00:18:21.137 "peer_address": { 00:18:21.137 "trtype": "RDMA", 00:18:21.137 "adrfam": "IPv4", 00:18:21.137 "traddr": "192.168.100.8", 00:18:21.137 "trsvcid": "47449" 00:18:21.137 }, 00:18:21.137 "auth": { 00:18:21.137 "state": "completed", 00:18:21.137 "digest": "sha512", 00:18:21.137 "dhgroup": "null" 00:18:21.137 } 00:18:21.137 } 00:18:21.137 ]' 00:18:21.137 10:25:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:21.137 10:25:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:21.137 10:25:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:21.397 10:25:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null 
== \n\u\l\l ]] 00:18:21.397 10:25:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:21.397 10:25:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.397 10:25:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.397 10:25:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.657 10:25:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:NDhmMWQ2ZDkyMmJjMTdkNTczZTg2MjQ4MGZiMWFjMzI2MWRjYWU2ZTU3Y2Y2MGI0kyBPIg==: --dhchap-ctrl-secret DHHC-1:01:YjU5OWFhYmE5MjhjMDhlYjQ5NWM0MmMyYWJmNThmZmU4NTAg: 00:18:22.227 10:25:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.487 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.487 10:25:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:22.487 10:25:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.487 10:25:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.487 10:25:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.487 10:25:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:22.487 10:25:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:22.487 10:25:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:22.747 10:25:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:18:22.747 10:25:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:22.747 10:25:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:22.747 10:25:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:22.747 10:25:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:22.747 10:25:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.747 10:25:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:22.747 10:25:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.747 10:25:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.747 10:25:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.747 10:25:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key3 00:18:22.747 10:25:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:22.747 00:18:23.006 10:25:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:23.006 10:25:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:23.006 10:25:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.006 10:26:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.006 10:26:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.006 10:26:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.006 10:26:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.006 10:26:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.006 10:26:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:23.006 { 00:18:23.006 "cntlid": 103, 00:18:23.006 "qid": 0, 00:18:23.006 "state": "enabled", 00:18:23.006 "thread": "nvmf_tgt_poll_group_000", 00:18:23.006 "listen_address": { 00:18:23.006 "trtype": "RDMA", 00:18:23.006 "adrfam": "IPv4", 00:18:23.006 "traddr": "192.168.100.8", 00:18:23.006 "trsvcid": "4420" 00:18:23.006 }, 00:18:23.006 "peer_address": { 00:18:23.006 "trtype": "RDMA", 00:18:23.006 "adrfam": "IPv4", 00:18:23.006 "traddr": "192.168.100.8", 00:18:23.006 "trsvcid": "54460" 00:18:23.006 }, 00:18:23.006 "auth": { 00:18:23.006 "state": "completed", 00:18:23.006 "digest": "sha512", 00:18:23.006 "dhgroup": "null" 00:18:23.006 } 00:18:23.006 } 00:18:23.006 ]' 00:18:23.006 10:26:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:23.006 10:26:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:23.006 10:26:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:23.266 10:26:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:23.266 10:26:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:23.266 10:26:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:23.266 10:26:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.266 10:26:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.266 10:26:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:NWI2Njg1NjQ2ZWM2Y2NlYmM0YTMyYzNkZDRiYTNlYzdiYmM3NTViNjMxYjViZjE5NTc1NGFjOWQwZDY3NDM3OWUz75w=: 00:18:24.204 10:26:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.204 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:18:24.205 10:26:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:24.205 10:26:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.205 10:26:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.464 10:26:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.464 10:26:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:24.464 10:26:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:24.464 10:26:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:24.464 10:26:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:24.464 10:26:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:18:24.464 10:26:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:24.464 10:26:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:24.464 10:26:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:24.464 10:26:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:24.464 10:26:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.464 10:26:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.464 10:26:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.464 10:26:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.464 10:26:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.464 10:26:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.464 10:26:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.724 00:18:24.724 10:26:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:24.724 10:26:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:24.724 10:26:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.984 10:26:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.984 10:26:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.984 10:26:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.984 10:26:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.984 10:26:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.984 10:26:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:24.984 { 00:18:24.984 "cntlid": 105, 00:18:24.984 "qid": 0, 00:18:24.984 "state": "enabled", 00:18:24.984 "thread": "nvmf_tgt_poll_group_000", 00:18:24.984 "listen_address": { 00:18:24.984 "trtype": "RDMA", 00:18:24.984 "adrfam": "IPv4", 00:18:24.984 "traddr": "192.168.100.8", 00:18:24.984 "trsvcid": "4420" 00:18:24.984 }, 00:18:24.984 "peer_address": { 00:18:24.984 "trtype": "RDMA", 00:18:24.984 "adrfam": "IPv4", 00:18:24.984 "traddr": "192.168.100.8", 00:18:24.984 "trsvcid": "54960" 00:18:24.984 }, 00:18:24.984 "auth": { 00:18:24.984 "state": "completed", 00:18:24.984 "digest": "sha512", 00:18:24.984 "dhgroup": "ffdhe2048" 00:18:24.984 } 00:18:24.984 } 00:18:24.984 ]' 00:18:24.984 10:26:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:24.984 10:26:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:24.984 10:26:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:24.984 10:26:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:24.984 10:26:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:24.984 10:26:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.984 10:26:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.984 10:26:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.243 10:26:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MTllYTZiZmFmMzlhYzFlNGQ5N2QzOTFiNTUzY2FlNmE3MGJmOTc1N2YzODJkYmEybZjEEQ==: --dhchap-ctrl-secret DHHC-1:03:MmNmNDgzNzYyMmYxNThhZWI5ZjdhNDBmYTdlMTczZmVjOTE3Yjg2NGI1NzU0ZjY3MWVmYjY5YzEwN2UxMzNmZRyZR80=: 00:18:26.182 10:26:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.182 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.182 10:26:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:26.182 10:26:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.182 10:26:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.182 10:26:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.182 10:26:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:26.182 10:26:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:26.182 10:26:03 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:26.442 10:26:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:18:26.442 10:26:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:26.442 10:26:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:26.442 10:26:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:26.442 10:26:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:26.442 10:26:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.442 10:26:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.442 10:26:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.442 10:26:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.442 10:26:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.442 10:26:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.442 10:26:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.702 00:18:26.702 10:26:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:26.702 10:26:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:26.702 10:26:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.702 10:26:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.702 10:26:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.702 10:26:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.702 10:26:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.702 10:26:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.702 10:26:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:26.702 { 00:18:26.702 "cntlid": 107, 00:18:26.702 "qid": 0, 00:18:26.702 "state": "enabled", 00:18:26.702 "thread": "nvmf_tgt_poll_group_000", 00:18:26.702 "listen_address": { 00:18:26.702 "trtype": "RDMA", 00:18:26.702 "adrfam": "IPv4", 00:18:26.702 "traddr": "192.168.100.8", 00:18:26.702 "trsvcid": "4420" 00:18:26.702 }, 00:18:26.702 "peer_address": { 00:18:26.702 "trtype": "RDMA", 00:18:26.702 "adrfam": "IPv4", 00:18:26.702 "traddr": "192.168.100.8", 00:18:26.702 "trsvcid": "48809" 00:18:26.702 }, 
00:18:26.702 "auth": { 00:18:26.702 "state": "completed", 00:18:26.702 "digest": "sha512", 00:18:26.702 "dhgroup": "ffdhe2048" 00:18:26.702 } 00:18:26.702 } 00:18:26.702 ]' 00:18:26.702 10:26:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:26.703 10:26:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:26.703 10:26:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:26.963 10:26:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:26.963 10:26:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:26.963 10:26:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.963 10:26:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.963 10:26:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.963 10:26:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:ZDU3OTcxNWM5MzI2NDQ3MWY5MTUxYmVkYmMzMDliNziVtd3P: --dhchap-ctrl-secret DHHC-1:02:YTA2ZGE1ZDcwYTRlYTEyMWZlMjE3MmRjZDliMTQ1M2I4NTE2YWNiYjE2MjRhZjMysjNqcg==: 00:18:27.902 10:26:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.902 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.902 10:26:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:27.902 10:26:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.902 10:26:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.903 10:26:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.903 10:26:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:27.903 10:26:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:27.903 10:26:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:28.163 10:26:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:18:28.163 10:26:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:28.163 10:26:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:28.163 10:26:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:28.163 10:26:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:28.163 10:26:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.163 10:26:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:28.163 10:26:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.163 10:26:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.163 10:26:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.163 10:26:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:28.163 10:26:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:28.423 00:18:28.423 10:26:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:28.423 10:26:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:28.423 10:26:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.683 10:26:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.683 10:26:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.683 10:26:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.684 10:26:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.684 10:26:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.684 10:26:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:28.684 { 00:18:28.684 "cntlid": 109, 00:18:28.684 "qid": 0, 00:18:28.684 "state": "enabled", 00:18:28.684 "thread": "nvmf_tgt_poll_group_000", 00:18:28.684 "listen_address": { 00:18:28.684 "trtype": "RDMA", 00:18:28.684 "adrfam": "IPv4", 00:18:28.684 "traddr": "192.168.100.8", 00:18:28.684 "trsvcid": "4420" 00:18:28.684 }, 00:18:28.684 "peer_address": { 00:18:28.684 "trtype": "RDMA", 00:18:28.684 "adrfam": "IPv4", 00:18:28.684 "traddr": "192.168.100.8", 00:18:28.684 "trsvcid": "57082" 00:18:28.684 }, 00:18:28.684 "auth": { 00:18:28.684 "state": "completed", 00:18:28.684 "digest": "sha512", 00:18:28.684 "dhgroup": "ffdhe2048" 00:18:28.684 } 00:18:28.684 } 00:18:28.684 ]' 00:18:28.684 10:26:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:28.684 10:26:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:28.684 10:26:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:28.684 10:26:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:28.684 10:26:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:28.684 10:26:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.684 10:26:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.684 
10:26:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.947 10:26:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:NDhmMWQ2ZDkyMmJjMTdkNTczZTg2MjQ4MGZiMWFjMzI2MWRjYWU2ZTU3Y2Y2MGI0kyBPIg==: --dhchap-ctrl-secret DHHC-1:01:YjU5OWFhYmE5MjhjMDhlYjQ5NWM0MmMyYWJmNThmZmU4NTAg: 00:18:29.582 10:26:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.845 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.845 10:26:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:29.845 10:26:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.845 10:26:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.845 10:26:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.845 10:26:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:29.845 10:26:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:29.845 10:26:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:30.107 10:26:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:18:30.107 10:26:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:30.107 10:26:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:30.107 10:26:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:30.107 10:26:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:30.107 10:26:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:30.107 10:26:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:30.107 10:26:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.107 10:26:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.107 10:26:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.107 10:26:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:30.107 10:26:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:30.107 00:18:30.368 10:26:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:30.368 10:26:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:30.369 10:26:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.369 10:26:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.369 10:26:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.369 10:26:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.369 10:26:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.369 10:26:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.369 10:26:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:30.369 { 00:18:30.369 "cntlid": 111, 00:18:30.369 "qid": 0, 00:18:30.369 "state": "enabled", 00:18:30.369 "thread": "nvmf_tgt_poll_group_000", 00:18:30.369 "listen_address": { 00:18:30.369 "trtype": "RDMA", 00:18:30.369 "adrfam": "IPv4", 00:18:30.369 "traddr": "192.168.100.8", 00:18:30.369 "trsvcid": "4420" 00:18:30.369 }, 00:18:30.369 "peer_address": { 00:18:30.369 "trtype": "RDMA", 00:18:30.369 "adrfam": "IPv4", 00:18:30.369 "traddr": "192.168.100.8", 00:18:30.369 "trsvcid": "36046" 00:18:30.369 }, 00:18:30.369 "auth": { 00:18:30.369 "state": "completed", 00:18:30.369 "digest": "sha512", 00:18:30.369 "dhgroup": "ffdhe2048" 00:18:30.369 } 00:18:30.369 } 00:18:30.369 ]' 00:18:30.369 10:26:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:30.369 10:26:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:30.369 10:26:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:30.630 10:26:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:30.630 10:26:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:30.630 10:26:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.631 10:26:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.631 10:26:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.631 10:26:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:NWI2Njg1NjQ2ZWM2Y2NlYmM0YTMyYzNkZDRiYTNlYzdiYmM3NTViNjMxYjViZjE5NTc1NGFjOWQwZDY3NDM3OWUz75w=: 00:18:31.575 10:26:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.836 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.836 10:26:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:31.836 10:26:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:18:31.836 10:26:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.836 10:26:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.836 10:26:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:31.836 10:26:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:31.836 10:26:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:31.836 10:26:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:31.836 10:26:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:18:31.836 10:26:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:31.836 10:26:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:31.836 10:26:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:31.836 10:26:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:31.836 10:26:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.836 10:26:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:31.836 10:26:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.836 10:26:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.836 10:26:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.836 10:26:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:31.836 10:26:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:32.097 00:18:32.097 10:26:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:32.097 10:26:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:32.097 10:26:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.357 10:26:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.357 10:26:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.357 10:26:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.357 10:26:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.357 10:26:09 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.357 10:26:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:32.357 { 00:18:32.357 "cntlid": 113, 00:18:32.357 "qid": 0, 00:18:32.357 "state": "enabled", 00:18:32.358 "thread": "nvmf_tgt_poll_group_000", 00:18:32.358 "listen_address": { 00:18:32.358 "trtype": "RDMA", 00:18:32.358 "adrfam": "IPv4", 00:18:32.358 "traddr": "192.168.100.8", 00:18:32.358 "trsvcid": "4420" 00:18:32.358 }, 00:18:32.358 "peer_address": { 00:18:32.358 "trtype": "RDMA", 00:18:32.358 "adrfam": "IPv4", 00:18:32.358 "traddr": "192.168.100.8", 00:18:32.358 "trsvcid": "41867" 00:18:32.358 }, 00:18:32.358 "auth": { 00:18:32.358 "state": "completed", 00:18:32.358 "digest": "sha512", 00:18:32.358 "dhgroup": "ffdhe3072" 00:18:32.358 } 00:18:32.358 } 00:18:32.358 ]' 00:18:32.358 10:26:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:32.358 10:26:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:32.358 10:26:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:32.358 10:26:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:32.358 10:26:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:32.358 10:26:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.358 10:26:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.358 10:26:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.617 10:26:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MTllYTZiZmFmMzlhYzFlNGQ5N2QzOTFiNTUzY2FlNmE3MGJmOTc1N2YzODJkYmEybZjEEQ==: --dhchap-ctrl-secret DHHC-1:03:MmNmNDgzNzYyMmYxNThhZWI5ZjdhNDBmYTdlMTczZmVjOTE3Yjg2NGI1NzU0ZjY3MWVmYjY5YzEwN2UxMzNmZRyZR80=: 00:18:33.556 10:26:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.556 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.556 10:26:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:33.556 10:26:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.556 10:26:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.556 10:26:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.556 10:26:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:33.556 10:26:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:33.556 10:26:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:33.815 10:26:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 
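For readers following the trace: the iterations above and below all repeat the same connect_authenticate cycle, varying only the digest, DH group, and key index. Stripped of the xtrace scaffolding, one iteration of the SPDK RPC sequence is roughly the sketch below. This is a minimal reconstruction, not the test script itself: it assumes key1/ckey1 were already registered earlier in the run, and it invokes rpc.py by its repository-relative path rather than the absolute Jenkins workspace path seen in the log.

  # Target side: allow the host NQN on the subsystem and bind its DH-HMAC-CHAP keys
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Host side (the /var/tmp/host.sock SPDK instance): restrict the initiator to one
  # digest/DH group combination, then attach with the matching keys
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
      -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Verify on the target that the queue pair completed authentication with the
  # expected parameters (these jq checks mirror target/auth.sh@46-48)
  scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
      | jq -r '.[0].auth | .digest, .dhgroup, .state'

  # Tear down before the next digest/DH group/key combination
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396

Every RPC name and flag above appears verbatim in the trace; only the paths and the ordering of the teardown steps are simplified.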
00:18:33.815 10:26:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:33.815 10:26:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:33.815 10:26:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:33.815 10:26:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:33.815 10:26:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:33.815 10:26:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:33.815 10:26:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.815 10:26:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.815 10:26:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.815 10:26:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:33.815 10:26:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:34.076 00:18:34.076 10:26:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:34.076 10:26:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:34.076 10:26:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.076 10:26:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.076 10:26:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:34.076 10:26:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.076 10:26:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.076 10:26:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.076 10:26:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:34.076 { 00:18:34.076 "cntlid": 115, 00:18:34.076 "qid": 0, 00:18:34.076 "state": "enabled", 00:18:34.076 "thread": "nvmf_tgt_poll_group_000", 00:18:34.076 "listen_address": { 00:18:34.076 "trtype": "RDMA", 00:18:34.076 "adrfam": "IPv4", 00:18:34.076 "traddr": "192.168.100.8", 00:18:34.076 "trsvcid": "4420" 00:18:34.076 }, 00:18:34.076 "peer_address": { 00:18:34.076 "trtype": "RDMA", 00:18:34.076 "adrfam": "IPv4", 00:18:34.076 "traddr": "192.168.100.8", 00:18:34.076 "trsvcid": "43067" 00:18:34.076 }, 00:18:34.076 "auth": { 00:18:34.076 "state": "completed", 00:18:34.076 "digest": "sha512", 00:18:34.076 "dhgroup": "ffdhe3072" 00:18:34.076 } 00:18:34.076 } 00:18:34.076 ]' 00:18:34.076 10:26:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:34.336 10:26:11 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:34.336 10:26:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:34.336 10:26:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:34.336 10:26:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:34.336 10:26:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.336 10:26:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.336 10:26:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.336 10:26:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:ZDU3OTcxNWM5MzI2NDQ3MWY5MTUxYmVkYmMzMDliNziVtd3P: --dhchap-ctrl-secret DHHC-1:02:YTA2ZGE1ZDcwYTRlYTEyMWZlMjE3MmRjZDliMTQ1M2I4NTE2YWNiYjE2MjRhZjMysjNqcg==: 00:18:35.275 10:26:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.275 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.275 10:26:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:35.275 10:26:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.275 10:26:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.535 10:26:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.535 10:26:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:35.535 10:26:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:35.535 10:26:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:35.535 10:26:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:18:35.535 10:26:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:35.535 10:26:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:35.535 10:26:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:35.535 10:26:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:35.535 10:26:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:35.535 10:26:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:35.535 10:26:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.535 10:26:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.535 10:26:12 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.535 10:26:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:35.535 10:26:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:35.796 00:18:35.796 10:26:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:35.796 10:26:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:35.796 10:26:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.057 10:26:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.057 10:26:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.057 10:26:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.057 10:26:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.057 10:26:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.057 10:26:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:36.057 { 00:18:36.057 "cntlid": 117, 00:18:36.057 "qid": 0, 00:18:36.057 "state": "enabled", 00:18:36.057 "thread": "nvmf_tgt_poll_group_000", 00:18:36.057 "listen_address": { 00:18:36.057 "trtype": "RDMA", 00:18:36.057 "adrfam": "IPv4", 00:18:36.057 "traddr": "192.168.100.8", 00:18:36.057 "trsvcid": "4420" 00:18:36.057 }, 00:18:36.057 "peer_address": { 00:18:36.057 "trtype": "RDMA", 00:18:36.057 "adrfam": "IPv4", 00:18:36.057 "traddr": "192.168.100.8", 00:18:36.057 "trsvcid": "49850" 00:18:36.057 }, 00:18:36.057 "auth": { 00:18:36.057 "state": "completed", 00:18:36.057 "digest": "sha512", 00:18:36.057 "dhgroup": "ffdhe3072" 00:18:36.057 } 00:18:36.057 } 00:18:36.057 ]' 00:18:36.057 10:26:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:36.057 10:26:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:36.057 10:26:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:36.057 10:26:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:36.057 10:26:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:36.057 10:26:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.057 10:26:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.057 10:26:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.317 10:26:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:NDhmMWQ2ZDkyMmJjMTdkNTczZTg2MjQ4MGZiMWFjMzI2MWRjYWU2ZTU3Y2Y2MGI0kyBPIg==: --dhchap-ctrl-secret DHHC-1:01:YjU5OWFhYmE5MjhjMDhlYjQ5NWM0MmMyYWJmNThmZmU4NTAg: 00:18:37.259 10:26:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.259 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.259 10:26:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:37.259 10:26:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.259 10:26:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.259 10:26:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.259 10:26:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:37.259 10:26:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:37.259 10:26:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:37.520 10:26:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:18:37.520 10:26:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:37.520 10:26:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:37.520 10:26:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:37.520 10:26:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:37.520 10:26:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:37.520 10:26:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:37.520 10:26:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.520 10:26:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.520 10:26:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.520 10:26:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:37.520 10:26:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:37.781 00:18:37.781 10:26:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:37.781 10:26:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:37.781 10:26:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.781 10:26:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.781 10:26:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:37.781 10:26:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.781 10:26:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.781 10:26:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.781 10:26:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:37.781 { 00:18:37.781 "cntlid": 119, 00:18:37.781 "qid": 0, 00:18:37.781 "state": "enabled", 00:18:37.781 "thread": "nvmf_tgt_poll_group_000", 00:18:37.781 "listen_address": { 00:18:37.781 "trtype": "RDMA", 00:18:37.781 "adrfam": "IPv4", 00:18:37.781 "traddr": "192.168.100.8", 00:18:37.781 "trsvcid": "4420" 00:18:37.781 }, 00:18:37.781 "peer_address": { 00:18:37.781 "trtype": "RDMA", 00:18:37.781 "adrfam": "IPv4", 00:18:37.781 "traddr": "192.168.100.8", 00:18:37.781 "trsvcid": "37173" 00:18:37.781 }, 00:18:37.781 "auth": { 00:18:37.781 "state": "completed", 00:18:37.781 "digest": "sha512", 00:18:37.781 "dhgroup": "ffdhe3072" 00:18:37.781 } 00:18:37.781 } 00:18:37.781 ]' 00:18:37.781 10:26:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:37.781 10:26:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:37.781 10:26:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:38.042 10:26:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:38.042 10:26:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:38.042 10:26:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.042 10:26:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.042 10:26:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.042 10:26:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:NWI2Njg1NjQ2ZWM2Y2NlYmM0YTMyYzNkZDRiYTNlYzdiYmM3NTViNjMxYjViZjE5NTc1NGFjOWQwZDY3NDM3OWUz75w=: 00:18:38.983 10:26:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.983 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.983 10:26:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:38.983 10:26:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.983 10:26:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.245 10:26:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.245 10:26:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:39.245 
10:26:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:39.245 10:26:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:39.245 10:26:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:39.245 10:26:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:18:39.245 10:26:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:39.245 10:26:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:39.245 10:26:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:39.245 10:26:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:39.245 10:26:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:39.245 10:26:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:39.245 10:26:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.245 10:26:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.245 10:26:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.245 10:26:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:39.245 10:26:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:39.505 00:18:39.505 10:26:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:39.505 10:26:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.505 10:26:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:39.766 10:26:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.766 10:26:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.766 10:26:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.766 10:26:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.766 10:26:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.766 10:26:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:39.766 { 00:18:39.766 "cntlid": 121, 00:18:39.766 "qid": 0, 00:18:39.766 "state": "enabled", 00:18:39.766 "thread": "nvmf_tgt_poll_group_000", 00:18:39.766 "listen_address": { 00:18:39.766 "trtype": "RDMA", 
00:18:39.766 "adrfam": "IPv4", 00:18:39.766 "traddr": "192.168.100.8", 00:18:39.766 "trsvcid": "4420" 00:18:39.766 }, 00:18:39.766 "peer_address": { 00:18:39.766 "trtype": "RDMA", 00:18:39.766 "adrfam": "IPv4", 00:18:39.766 "traddr": "192.168.100.8", 00:18:39.766 "trsvcid": "51597" 00:18:39.766 }, 00:18:39.766 "auth": { 00:18:39.766 "state": "completed", 00:18:39.766 "digest": "sha512", 00:18:39.766 "dhgroup": "ffdhe4096" 00:18:39.766 } 00:18:39.766 } 00:18:39.766 ]' 00:18:39.766 10:26:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:39.766 10:26:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:39.766 10:26:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:39.766 10:26:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:39.766 10:26:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:39.766 10:26:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:39.766 10:26:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.766 10:26:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.027 10:26:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MTllYTZiZmFmMzlhYzFlNGQ5N2QzOTFiNTUzY2FlNmE3MGJmOTc1N2YzODJkYmEybZjEEQ==: --dhchap-ctrl-secret DHHC-1:03:MmNmNDgzNzYyMmYxNThhZWI5ZjdhNDBmYTdlMTczZmVjOTE3Yjg2NGI1NzU0ZjY3MWVmYjY5YzEwN2UxMzNmZRyZR80=: 00:18:40.967 10:26:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.967 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.967 10:26:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:40.967 10:26:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.967 10:26:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.967 10:26:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.967 10:26:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:40.967 10:26:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:40.967 10:26:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:41.227 10:26:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:18:41.227 10:26:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:41.227 10:26:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:41.227 10:26:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:41.227 10:26:18 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key1 00:18:41.227 10:26:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:41.227 10:26:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:41.227 10:26:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.227 10:26:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.227 10:26:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.227 10:26:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:41.227 10:26:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:41.487 00:18:41.487 10:26:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:41.487 10:26:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:41.487 10:26:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.487 10:26:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.487 10:26:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:41.487 10:26:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.487 10:26:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.747 10:26:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.747 10:26:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:41.747 { 00:18:41.747 "cntlid": 123, 00:18:41.747 "qid": 0, 00:18:41.747 "state": "enabled", 00:18:41.747 "thread": "nvmf_tgt_poll_group_000", 00:18:41.747 "listen_address": { 00:18:41.747 "trtype": "RDMA", 00:18:41.747 "adrfam": "IPv4", 00:18:41.747 "traddr": "192.168.100.8", 00:18:41.747 "trsvcid": "4420" 00:18:41.747 }, 00:18:41.747 "peer_address": { 00:18:41.747 "trtype": "RDMA", 00:18:41.747 "adrfam": "IPv4", 00:18:41.747 "traddr": "192.168.100.8", 00:18:41.747 "trsvcid": "47036" 00:18:41.747 }, 00:18:41.747 "auth": { 00:18:41.747 "state": "completed", 00:18:41.747 "digest": "sha512", 00:18:41.747 "dhgroup": "ffdhe4096" 00:18:41.747 } 00:18:41.747 } 00:18:41.747 ]' 00:18:41.747 10:26:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:41.747 10:26:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:41.747 10:26:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:41.747 10:26:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:41.747 10:26:18 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:41.747 10:26:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:41.747 10:26:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.747 10:26:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.006 10:26:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:ZDU3OTcxNWM5MzI2NDQ3MWY5MTUxYmVkYmMzMDliNziVtd3P: --dhchap-ctrl-secret DHHC-1:02:YTA2ZGE1ZDcwYTRlYTEyMWZlMjE3MmRjZDliMTQ1M2I4NTE2YWNiYjE2MjRhZjMysjNqcg==: 00:18:42.946 10:26:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.946 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.946 10:26:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:42.946 10:26:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.946 10:26:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.947 10:26:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.947 10:26:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:42.947 10:26:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:42.947 10:26:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:42.947 10:26:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:18:42.947 10:26:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:42.947 10:26:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:42.947 10:26:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:42.947 10:26:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:42.947 10:26:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.947 10:26:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:42.947 10:26:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.947 10:26:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.947 10:26:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.947 10:26:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:18:42.947 10:26:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:43.207 00:18:43.207 10:26:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:43.467 10:26:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:43.467 10:26:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.467 10:26:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.467 10:26:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.467 10:26:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.467 10:26:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.467 10:26:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.467 10:26:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:43.467 { 00:18:43.467 "cntlid": 125, 00:18:43.467 "qid": 0, 00:18:43.467 "state": "enabled", 00:18:43.467 "thread": "nvmf_tgt_poll_group_000", 00:18:43.467 "listen_address": { 00:18:43.467 "trtype": "RDMA", 00:18:43.467 "adrfam": "IPv4", 00:18:43.467 "traddr": "192.168.100.8", 00:18:43.467 "trsvcid": "4420" 00:18:43.467 }, 00:18:43.467 "peer_address": { 00:18:43.467 "trtype": "RDMA", 00:18:43.467 "adrfam": "IPv4", 00:18:43.467 "traddr": "192.168.100.8", 00:18:43.467 "trsvcid": "35171" 00:18:43.467 }, 00:18:43.467 "auth": { 00:18:43.467 "state": "completed", 00:18:43.467 "digest": "sha512", 00:18:43.467 "dhgroup": "ffdhe4096" 00:18:43.467 } 00:18:43.467 } 00:18:43.467 ]' 00:18:43.467 10:26:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:43.467 10:26:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:43.467 10:26:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:43.727 10:26:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:43.727 10:26:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:43.727 10:26:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.727 10:26:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.727 10:26:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.727 10:26:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:NDhmMWQ2ZDkyMmJjMTdkNTczZTg2MjQ4MGZiMWFjMzI2MWRjYWU2ZTU3Y2Y2MGI0kyBPIg==: --dhchap-ctrl-secret DHHC-1:01:YjU5OWFhYmE5MjhjMDhlYjQ5NWM0MmMyYWJmNThmZmU4NTAg: 00:18:44.668 10:26:21 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.668 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.668 10:26:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:44.668 10:26:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.668 10:26:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.668 10:26:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.668 10:26:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:44.668 10:26:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:44.668 10:26:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:44.930 10:26:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:18:44.930 10:26:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:44.930 10:26:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:44.930 10:26:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:44.930 10:26:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:44.930 10:26:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:44.930 10:26:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:44.930 10:26:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.930 10:26:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.930 10:26:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.930 10:26:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:44.930 10:26:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:45.190 00:18:45.190 10:26:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:45.190 10:26:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:45.190 10:26:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.450 10:26:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.450 10:26:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.450 
10:26:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.450 10:26:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.450 10:26:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.450 10:26:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:45.450 { 00:18:45.450 "cntlid": 127, 00:18:45.450 "qid": 0, 00:18:45.450 "state": "enabled", 00:18:45.450 "thread": "nvmf_tgt_poll_group_000", 00:18:45.450 "listen_address": { 00:18:45.450 "trtype": "RDMA", 00:18:45.450 "adrfam": "IPv4", 00:18:45.450 "traddr": "192.168.100.8", 00:18:45.450 "trsvcid": "4420" 00:18:45.450 }, 00:18:45.450 "peer_address": { 00:18:45.450 "trtype": "RDMA", 00:18:45.450 "adrfam": "IPv4", 00:18:45.450 "traddr": "192.168.100.8", 00:18:45.450 "trsvcid": "38367" 00:18:45.450 }, 00:18:45.450 "auth": { 00:18:45.450 "state": "completed", 00:18:45.450 "digest": "sha512", 00:18:45.450 "dhgroup": "ffdhe4096" 00:18:45.450 } 00:18:45.450 } 00:18:45.450 ]' 00:18:45.450 10:26:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:45.450 10:26:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:45.450 10:26:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:45.450 10:26:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:45.450 10:26:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:45.450 10:26:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.450 10:26:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.450 10:26:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:45.710 10:26:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:NWI2Njg1NjQ2ZWM2Y2NlYmM0YTMyYzNkZDRiYTNlYzdiYmM3NTViNjMxYjViZjE5NTc1NGFjOWQwZDY3NDM3OWUz75w=: 00:18:46.653 10:26:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.653 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.653 10:26:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:46.653 10:26:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.653 10:26:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.653 10:26:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.653 10:26:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:46.653 10:26:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:46.653 10:26:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:46.653 10:26:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:46.914 10:26:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:18:46.914 10:26:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:46.915 10:26:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:46.915 10:26:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:46.915 10:26:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:46.915 10:26:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:46.915 10:26:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:46.915 10:26:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.915 10:26:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.915 10:26:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.915 10:26:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:46.915 10:26:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:47.176 00:18:47.176 10:26:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:47.176 10:26:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:47.176 10:26:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.437 10:26:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.437 10:26:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.437 10:26:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.437 10:26:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.437 10:26:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.437 10:26:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:47.437 { 00:18:47.437 "cntlid": 129, 00:18:47.437 "qid": 0, 00:18:47.437 "state": "enabled", 00:18:47.437 "thread": "nvmf_tgt_poll_group_000", 00:18:47.437 "listen_address": { 00:18:47.437 "trtype": "RDMA", 00:18:47.437 "adrfam": "IPv4", 00:18:47.437 "traddr": "192.168.100.8", 00:18:47.437 "trsvcid": "4420" 00:18:47.437 }, 00:18:47.437 "peer_address": { 00:18:47.437 "trtype": "RDMA", 00:18:47.437 "adrfam": "IPv4", 00:18:47.437 "traddr": "192.168.100.8", 00:18:47.437 "trsvcid": "49944" 00:18:47.437 }, 00:18:47.437 "auth": { 
00:18:47.437 "state": "completed", 00:18:47.437 "digest": "sha512", 00:18:47.437 "dhgroup": "ffdhe6144" 00:18:47.437 } 00:18:47.437 } 00:18:47.437 ]' 00:18:47.437 10:26:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:47.437 10:26:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:47.437 10:26:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:47.437 10:26:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:47.437 10:26:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:47.437 10:26:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.437 10:26:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.437 10:26:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.698 10:26:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MTllYTZiZmFmMzlhYzFlNGQ5N2QzOTFiNTUzY2FlNmE3MGJmOTc1N2YzODJkYmEybZjEEQ==: --dhchap-ctrl-secret DHHC-1:03:MmNmNDgzNzYyMmYxNThhZWI5ZjdhNDBmYTdlMTczZmVjOTE3Yjg2NGI1NzU0ZjY3MWVmYjY5YzEwN2UxMzNmZRyZR80=: 00:18:48.640 10:26:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.640 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.641 10:26:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:48.641 10:26:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.641 10:26:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.641 10:26:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.641 10:26:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:48.641 10:26:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:48.641 10:26:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:48.902 10:26:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:18:48.902 10:26:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:48.902 10:26:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:48.902 10:26:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:48.902 10:26:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:48.902 10:26:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:48.902 10:26:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.902 10:26:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.902 10:26:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.902 10:26:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.902 10:26:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.902 10:26:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:49.162 00:18:49.162 10:26:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:49.162 10:26:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.162 10:26:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:49.424 10:26:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.424 10:26:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.424 10:26:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.424 10:26:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.424 10:26:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.424 10:26:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:49.424 { 00:18:49.424 "cntlid": 131, 00:18:49.424 "qid": 0, 00:18:49.424 "state": "enabled", 00:18:49.424 "thread": "nvmf_tgt_poll_group_000", 00:18:49.424 "listen_address": { 00:18:49.424 "trtype": "RDMA", 00:18:49.424 "adrfam": "IPv4", 00:18:49.424 "traddr": "192.168.100.8", 00:18:49.424 "trsvcid": "4420" 00:18:49.424 }, 00:18:49.424 "peer_address": { 00:18:49.424 "trtype": "RDMA", 00:18:49.424 "adrfam": "IPv4", 00:18:49.424 "traddr": "192.168.100.8", 00:18:49.424 "trsvcid": "35688" 00:18:49.424 }, 00:18:49.424 "auth": { 00:18:49.424 "state": "completed", 00:18:49.424 "digest": "sha512", 00:18:49.424 "dhgroup": "ffdhe6144" 00:18:49.424 } 00:18:49.424 } 00:18:49.424 ]' 00:18:49.424 10:26:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:49.424 10:26:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:49.424 10:26:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:49.424 10:26:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:49.424 10:26:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:49.424 10:26:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.424 10:26:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.424 
10:26:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.685 10:26:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:ZDU3OTcxNWM5MzI2NDQ3MWY5MTUxYmVkYmMzMDliNziVtd3P: --dhchap-ctrl-secret DHHC-1:02:YTA2ZGE1ZDcwYTRlYTEyMWZlMjE3MmRjZDliMTQ1M2I4NTE2YWNiYjE2MjRhZjMysjNqcg==: 00:18:50.630 10:26:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.630 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.630 10:26:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:50.630 10:26:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.630 10:26:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.630 10:26:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.630 10:26:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:50.630 10:26:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:50.630 10:26:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:50.630 10:26:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:18:50.630 10:26:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:50.630 10:26:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:50.630 10:26:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:50.630 10:26:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:50.630 10:26:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.630 10:26:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:50.630 10:26:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.630 10:26:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.630 10:26:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.630 10:26:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:50.630 10:26:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:51.203 00:18:51.203 10:26:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:51.203 10:26:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.203 10:26:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:51.203 10:26:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.203 10:26:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.203 10:26:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.203 10:26:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.203 10:26:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.203 10:26:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:51.203 { 00:18:51.203 "cntlid": 133, 00:18:51.203 "qid": 0, 00:18:51.203 "state": "enabled", 00:18:51.203 "thread": "nvmf_tgt_poll_group_000", 00:18:51.203 "listen_address": { 00:18:51.203 "trtype": "RDMA", 00:18:51.203 "adrfam": "IPv4", 00:18:51.203 "traddr": "192.168.100.8", 00:18:51.203 "trsvcid": "4420" 00:18:51.203 }, 00:18:51.203 "peer_address": { 00:18:51.203 "trtype": "RDMA", 00:18:51.203 "adrfam": "IPv4", 00:18:51.203 "traddr": "192.168.100.8", 00:18:51.203 "trsvcid": "58854" 00:18:51.203 }, 00:18:51.203 "auth": { 00:18:51.203 "state": "completed", 00:18:51.203 "digest": "sha512", 00:18:51.203 "dhgroup": "ffdhe6144" 00:18:51.203 } 00:18:51.203 } 00:18:51.203 ]' 00:18:51.203 10:26:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:51.203 10:26:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:51.203 10:26:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:51.463 10:26:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:51.463 10:26:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:51.463 10:26:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.463 10:26:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.463 10:26:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.464 10:26:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:NDhmMWQ2ZDkyMmJjMTdkNTczZTg2MjQ4MGZiMWFjMzI2MWRjYWU2ZTU3Y2Y2MGI0kyBPIg==: --dhchap-ctrl-secret DHHC-1:01:YjU5OWFhYmE5MjhjMDhlYjQ5NWM0MmMyYWJmNThmZmU4NTAg: 00:18:52.403 10:26:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.403 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.403 10:26:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:52.403 10:26:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.403 10:26:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.663 10:26:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.663 10:26:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:52.663 10:26:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:52.663 10:26:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:52.663 10:26:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:18:52.663 10:26:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:52.663 10:26:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:52.663 10:26:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:52.663 10:26:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:52.663 10:26:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:52.663 10:26:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:52.663 10:26:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.663 10:26:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.663 10:26:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.663 10:26:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:52.663 10:26:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:52.923 00:18:52.923 10:26:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:52.923 10:26:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:52.923 10:26:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.185 10:26:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.185 10:26:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:53.185 10:26:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.185 10:26:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.185 10:26:30 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.185 10:26:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:53.185 { 00:18:53.185 "cntlid": 135, 00:18:53.185 "qid": 0, 00:18:53.185 "state": "enabled", 00:18:53.185 "thread": "nvmf_tgt_poll_group_000", 00:18:53.185 "listen_address": { 00:18:53.185 "trtype": "RDMA", 00:18:53.185 "adrfam": "IPv4", 00:18:53.185 "traddr": "192.168.100.8", 00:18:53.185 "trsvcid": "4420" 00:18:53.185 }, 00:18:53.185 "peer_address": { 00:18:53.185 "trtype": "RDMA", 00:18:53.185 "adrfam": "IPv4", 00:18:53.185 "traddr": "192.168.100.8", 00:18:53.185 "trsvcid": "52730" 00:18:53.185 }, 00:18:53.185 "auth": { 00:18:53.185 "state": "completed", 00:18:53.185 "digest": "sha512", 00:18:53.185 "dhgroup": "ffdhe6144" 00:18:53.185 } 00:18:53.185 } 00:18:53.185 ]' 00:18:53.185 10:26:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:53.185 10:26:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:53.185 10:26:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:53.185 10:26:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:53.185 10:26:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:53.446 10:26:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:53.446 10:26:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:53.446 10:26:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.446 10:26:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:NWI2Njg1NjQ2ZWM2Y2NlYmM0YTMyYzNkZDRiYTNlYzdiYmM3NTViNjMxYjViZjE5NTc1NGFjOWQwZDY3NDM3OWUz75w=: 00:18:54.387 10:26:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.387 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.387 10:26:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:54.387 10:26:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.387 10:26:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.387 10:26:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.387 10:26:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:54.387 10:26:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:54.387 10:26:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:54.387 10:26:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:54.648 10:26:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 
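The entry above kicks off another connect_authenticate pass, this time sha512 with ffdhe8192 and key0, and the same three-step pattern recurs throughout this log: the host-side bdev_nvme layer is pinned to the digest and DH group under test, the target is told which DH-HMAC-CHAP key (and controller key) the host NQN must present, and the attach only succeeds if the handshake completes. The bash sketch below condenses one such pass using only the RPCs and flags visible in this log; it is illustrative only and assumes the rpc.py path and sockets used in this run, plus keyring entries named key0/ckey0 that the test registered earlier (not shown in this excerpt). The kernel-initiator variant of the same handshake is sketched a little further down.

    #!/usr/bin/env bash
    # Sketch of one connect_authenticate round as exercised in this log.
    # Assumptions: rpc.py path, host RPC socket, NQNs and key names match what
    # the test set up earlier; the target app listens on rpc.py's default socket.
    set -e

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    hostsock=/var/tmp/host.sock
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
    digest=sha512
    dhgroup=ffdhe8192
    keyid=0

    # Host side: restrict the SPDK initiator to the digest/dhgroup under test.
    "$rpc" -s "$hostsock" bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Target side: allow this host NQN and bind it to a DH-HMAC-CHAP key pair
    # (key0/ckey0 are keyring names registered earlier in the test, not shown here).
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

    # Host side: attach; this only succeeds if DH-HMAC-CHAP completes.
    "$rpc" -s "$hostsock" bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 -q "$hostnqn" -n "$subnqn" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"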
00:18:54.648 10:26:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:54.648 10:26:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:54.648 10:26:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:54.648 10:26:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:54.648 10:26:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.648 10:26:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:54.648 10:26:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.648 10:26:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.648 10:26:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.648 10:26:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:54.648 10:26:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:55.218 00:18:55.218 10:26:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:55.218 10:26:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:55.219 10:26:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.219 10:26:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.219 10:26:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.219 10:26:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.219 10:26:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.219 10:26:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.219 10:26:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:55.219 { 00:18:55.219 "cntlid": 137, 00:18:55.219 "qid": 0, 00:18:55.219 "state": "enabled", 00:18:55.219 "thread": "nvmf_tgt_poll_group_000", 00:18:55.219 "listen_address": { 00:18:55.219 "trtype": "RDMA", 00:18:55.219 "adrfam": "IPv4", 00:18:55.219 "traddr": "192.168.100.8", 00:18:55.219 "trsvcid": "4420" 00:18:55.219 }, 00:18:55.219 "peer_address": { 00:18:55.219 "trtype": "RDMA", 00:18:55.219 "adrfam": "IPv4", 00:18:55.219 "traddr": "192.168.100.8", 00:18:55.219 "trsvcid": "40686" 00:18:55.219 }, 00:18:55.219 "auth": { 00:18:55.219 "state": "completed", 00:18:55.219 "digest": "sha512", 00:18:55.219 "dhgroup": "ffdhe8192" 00:18:55.219 } 00:18:55.219 } 00:18:55.219 ]' 00:18:55.219 10:26:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:55.479 10:26:32 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:55.479 10:26:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:55.479 10:26:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:55.479 10:26:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:55.479 10:26:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.479 10:26:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.479 10:26:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.739 10:26:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MTllYTZiZmFmMzlhYzFlNGQ5N2QzOTFiNTUzY2FlNmE3MGJmOTc1N2YzODJkYmEybZjEEQ==: --dhchap-ctrl-secret DHHC-1:03:MmNmNDgzNzYyMmYxNThhZWI5ZjdhNDBmYTdlMTczZmVjOTE3Yjg2NGI1NzU0ZjY3MWVmYjY5YzEwN2UxMzNmZRyZR80=: 00:18:56.679 10:26:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.679 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.679 10:26:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:56.679 10:26:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.679 10:26:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.679 10:26:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.679 10:26:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:56.679 10:26:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:56.679 10:26:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:56.679 10:26:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:18:56.679 10:26:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:56.679 10:26:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:56.679 10:26:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:56.679 10:26:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:56.679 10:26:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:56.679 10:26:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:56.679 10:26:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.679 10:26:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
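Each pass then ends the way the entries just above do: the controller name is read back, nvmf_subsystem_get_qpairs is queried and the negotiated auth.digest, auth.dhgroup and auth.state fields are compared against the configured values, and the SPDK initiator is detached so the kernel initiator can repeat the handshake through nvme connect with the DHHC-1 secrets. Below is a hedged sketch of those checks, reusing the variables from the earlier sketch and assuming jq and nvme-cli are available; the DHHC-1 strings are placeholders for the secrets generated earlier in the test.

    # Verify the host-side controller came up.
    [[ "$("$rpc" -s "$hostsock" bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]

    # Verify the target saw an authenticated queue pair with the expected parameters.
    qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
    [[ "$(jq -r '.[0].auth.digest'  <<< "$qpairs")" == "$digest" ]]
    [[ "$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")" == "$dhgroup" ]]
    [[ "$(jq -r '.[0].auth.state'   <<< "$qpairs")" == completed ]]

    # Tear down the SPDK-host controller, then repeat the handshake with the
    # kernel initiator; <key>/<ctrl-key> stand in for the generated secrets.
    "$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0
    nvme connect -t rdma -a 192.168.100.8 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 \
        --dhchap-secret "DHHC-1:00:<key>:" --dhchap-ctrl-secret "DHHC-1:03:<ctrl-key>:"
    nvme disconnect -n "$subnqn"

The log alternates between the two initiators for every digest/dhgroup/key combination, which appears intended to exercise both the SPDK bdev_nvme host stack and the kernel nvme stack against the same target authentication configuration.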
00:18:56.679 10:26:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.679 10:26:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:56.679 10:26:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:57.250 00:18:57.250 10:26:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:57.250 10:26:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.250 10:26:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:57.512 10:26:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.512 10:26:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.512 10:26:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.512 10:26:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.512 10:26:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.512 10:26:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:57.512 { 00:18:57.512 "cntlid": 139, 00:18:57.512 "qid": 0, 00:18:57.512 "state": "enabled", 00:18:57.512 "thread": "nvmf_tgt_poll_group_000", 00:18:57.512 "listen_address": { 00:18:57.512 "trtype": "RDMA", 00:18:57.512 "adrfam": "IPv4", 00:18:57.512 "traddr": "192.168.100.8", 00:18:57.512 "trsvcid": "4420" 00:18:57.512 }, 00:18:57.512 "peer_address": { 00:18:57.512 "trtype": "RDMA", 00:18:57.512 "adrfam": "IPv4", 00:18:57.512 "traddr": "192.168.100.8", 00:18:57.512 "trsvcid": "54572" 00:18:57.512 }, 00:18:57.512 "auth": { 00:18:57.512 "state": "completed", 00:18:57.512 "digest": "sha512", 00:18:57.512 "dhgroup": "ffdhe8192" 00:18:57.512 } 00:18:57.512 } 00:18:57.512 ]' 00:18:57.512 10:26:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:57.512 10:26:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:57.512 10:26:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:57.512 10:26:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:57.512 10:26:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:57.512 10:26:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.512 10:26:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.512 10:26:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.772 10:26:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:ZDU3OTcxNWM5MzI2NDQ3MWY5MTUxYmVkYmMzMDliNziVtd3P: --dhchap-ctrl-secret DHHC-1:02:YTA2ZGE1ZDcwYTRlYTEyMWZlMjE3MmRjZDliMTQ1M2I4NTE2YWNiYjE2MjRhZjMysjNqcg==: 00:18:58.714 10:26:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:58.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:58.714 10:26:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:58.714 10:26:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.714 10:26:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.714 10:26:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.714 10:26:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:58.714 10:26:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:58.714 10:26:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:58.975 10:26:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:18:58.975 10:26:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:58.975 10:26:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:58.975 10:26:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:58.975 10:26:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:58.975 10:26:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.975 10:26:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.975 10:26:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.975 10:26:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.975 10:26:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.975 10:26:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.975 10:26:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:59.558 00:18:59.558 10:26:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:59.558 10:26:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 
-- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.558 10:26:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:59.558 10:26:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.558 10:26:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.558 10:26:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.558 10:26:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.558 10:26:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.558 10:26:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:59.558 { 00:18:59.558 "cntlid": 141, 00:18:59.558 "qid": 0, 00:18:59.558 "state": "enabled", 00:18:59.558 "thread": "nvmf_tgt_poll_group_000", 00:18:59.558 "listen_address": { 00:18:59.558 "trtype": "RDMA", 00:18:59.558 "adrfam": "IPv4", 00:18:59.558 "traddr": "192.168.100.8", 00:18:59.558 "trsvcid": "4420" 00:18:59.558 }, 00:18:59.558 "peer_address": { 00:18:59.558 "trtype": "RDMA", 00:18:59.558 "adrfam": "IPv4", 00:18:59.558 "traddr": "192.168.100.8", 00:18:59.558 "trsvcid": "52468" 00:18:59.558 }, 00:18:59.558 "auth": { 00:18:59.558 "state": "completed", 00:18:59.558 "digest": "sha512", 00:18:59.558 "dhgroup": "ffdhe8192" 00:18:59.558 } 00:18:59.558 } 00:18:59.558 ]' 00:18:59.558 10:26:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:59.558 10:26:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:59.558 10:26:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:59.863 10:26:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:59.863 10:26:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:59.863 10:26:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.863 10:26:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.863 10:26:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.863 10:26:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:NDhmMWQ2ZDkyMmJjMTdkNTczZTg2MjQ4MGZiMWFjMzI2MWRjYWU2ZTU3Y2Y2MGI0kyBPIg==: --dhchap-ctrl-secret DHHC-1:01:YjU5OWFhYmE5MjhjMDhlYjQ5NWM0MmMyYWJmNThmZmU4NTAg: 00:19:00.830 10:26:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.830 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.830 10:26:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:00.830 10:26:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.830 10:26:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.830 10:26:37 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.830 10:26:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:00.830 10:26:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:00.830 10:26:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:01.090 10:26:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:19:01.090 10:26:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:01.090 10:26:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:01.090 10:26:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:01.090 10:26:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:01.090 10:26:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:01.090 10:26:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:01.090 10:26:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.090 10:26:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.090 10:26:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.090 10:26:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:01.090 10:26:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:01.692 00:19:01.692 10:26:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:01.692 10:26:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.692 10:26:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:01.692 10:26:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.692 10:26:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.692 10:26:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.692 10:26:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.692 10:26:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.692 10:26:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:01.692 { 00:19:01.692 "cntlid": 143, 00:19:01.692 "qid": 0, 00:19:01.692 "state": "enabled", 00:19:01.692 "thread": "nvmf_tgt_poll_group_000", 00:19:01.692 "listen_address": { 00:19:01.692 "trtype": "RDMA", 00:19:01.692 
"adrfam": "IPv4", 00:19:01.692 "traddr": "192.168.100.8", 00:19:01.692 "trsvcid": "4420" 00:19:01.692 }, 00:19:01.692 "peer_address": { 00:19:01.692 "trtype": "RDMA", 00:19:01.692 "adrfam": "IPv4", 00:19:01.692 "traddr": "192.168.100.8", 00:19:01.692 "trsvcid": "44811" 00:19:01.692 }, 00:19:01.692 "auth": { 00:19:01.692 "state": "completed", 00:19:01.692 "digest": "sha512", 00:19:01.692 "dhgroup": "ffdhe8192" 00:19:01.692 } 00:19:01.692 } 00:19:01.692 ]' 00:19:01.692 10:26:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:01.692 10:26:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:01.692 10:26:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:01.952 10:26:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:01.952 10:26:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:01.953 10:26:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:01.953 10:26:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.953 10:26:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.953 10:26:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:NWI2Njg1NjQ2ZWM2Y2NlYmM0YTMyYzNkZDRiYTNlYzdiYmM3NTViNjMxYjViZjE5NTc1NGFjOWQwZDY3NDM3OWUz75w=: 00:19:02.893 10:26:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.893 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.893 10:26:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:02.893 10:26:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.893 10:26:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.893 10:26:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.893 10:26:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:02.893 10:26:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:19:02.893 10:26:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:02.893 10:26:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:02.893 10:26:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:02.893 10:26:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:03.154 10:26:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:19:03.154 10:26:40 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:03.154 10:26:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:03.154 10:26:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:03.154 10:26:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:03.154 10:26:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:03.154 10:26:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:03.154 10:26:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.154 10:26:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.154 10:26:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.154 10:26:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:03.154 10:26:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:03.726 00:19:03.726 10:26:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:03.726 10:26:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.726 10:26:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:03.726 10:26:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.726 10:26:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.726 10:26:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.726 10:26:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.726 10:26:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.726 10:26:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:03.726 { 00:19:03.726 "cntlid": 145, 00:19:03.726 "qid": 0, 00:19:03.726 "state": "enabled", 00:19:03.726 "thread": "nvmf_tgt_poll_group_000", 00:19:03.726 "listen_address": { 00:19:03.726 "trtype": "RDMA", 00:19:03.726 "adrfam": "IPv4", 00:19:03.726 "traddr": "192.168.100.8", 00:19:03.726 "trsvcid": "4420" 00:19:03.726 }, 00:19:03.726 "peer_address": { 00:19:03.726 "trtype": "RDMA", 00:19:03.726 "adrfam": "IPv4", 00:19:03.726 "traddr": "192.168.100.8", 00:19:03.726 "trsvcid": "50611" 00:19:03.726 }, 00:19:03.726 "auth": { 00:19:03.726 "state": "completed", 00:19:03.726 "digest": "sha512", 00:19:03.726 "dhgroup": "ffdhe8192" 00:19:03.726 } 00:19:03.726 } 00:19:03.726 ]' 00:19:03.726 10:26:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:03.726 10:26:40 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:03.726 10:26:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:03.726 10:26:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:03.726 10:26:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:03.986 10:26:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.986 10:26:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.986 10:26:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.987 10:26:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:MTllYTZiZmFmMzlhYzFlNGQ5N2QzOTFiNTUzY2FlNmE3MGJmOTc1N2YzODJkYmEybZjEEQ==: --dhchap-ctrl-secret DHHC-1:03:MmNmNDgzNzYyMmYxNThhZWI5ZjdhNDBmYTdlMTczZmVjOTE3Yjg2NGI1NzU0ZjY3MWVmYjY5YzEwN2UxMzNmZRyZR80=: 00:19:04.929 10:26:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.929 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.929 10:26:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:04.929 10:26:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.929 10:26:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.929 10:26:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.929 10:26:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:19:04.929 10:26:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.929 10:26:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.929 10:26:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.929 10:26:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:04.929 10:26:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:04.929 10:26:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:04.929 10:26:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:04.930 10:26:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:04.930 10:26:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:04.930 
10:26:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:04.930 10:26:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:04.930 10:26:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:37.032 request: 00:19:37.032 { 00:19:37.032 "name": "nvme0", 00:19:37.032 "trtype": "rdma", 00:19:37.032 "traddr": "192.168.100.8", 00:19:37.032 "adrfam": "ipv4", 00:19:37.032 "trsvcid": "4420", 00:19:37.032 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:37.032 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:37.032 "prchk_reftag": false, 00:19:37.032 "prchk_guard": false, 00:19:37.032 "hdgst": false, 00:19:37.032 "ddgst": false, 00:19:37.032 "dhchap_key": "key2", 00:19:37.032 "method": "bdev_nvme_attach_controller", 00:19:37.032 "req_id": 1 00:19:37.032 } 00:19:37.032 Got JSON-RPC error response 00:19:37.032 response: 00:19:37.032 { 00:19:37.032 "code": -5, 00:19:37.032 "message": "Input/output error" 00:19:37.032 } 00:19:37.032 10:27:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:37.032 10:27:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:37.032 10:27:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:37.032 10:27:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:37.032 10:27:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:37.032 10:27:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.032 10:27:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.032 10:27:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.032 10:27:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.032 10:27:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.032 10:27:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.032 10:27:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.032 10:27:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:37.032 10:27:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:37.032 10:27:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:37.032 10:27:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:37.032 10:27:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:37.032 10:27:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:37.032 10:27:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:37.032 10:27:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:37.032 10:27:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:37.032 request: 00:19:37.032 { 00:19:37.032 "name": "nvme0", 00:19:37.032 "trtype": "rdma", 00:19:37.032 "traddr": "192.168.100.8", 00:19:37.032 "adrfam": "ipv4", 00:19:37.032 "trsvcid": "4420", 00:19:37.032 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:37.032 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:37.032 "prchk_reftag": false, 00:19:37.032 "prchk_guard": false, 00:19:37.032 "hdgst": false, 00:19:37.032 "ddgst": false, 00:19:37.032 "dhchap_key": "key1", 00:19:37.032 "dhchap_ctrlr_key": "ckey2", 00:19:37.032 "method": "bdev_nvme_attach_controller", 00:19:37.032 "req_id": 1 00:19:37.032 } 00:19:37.032 Got JSON-RPC error response 00:19:37.032 response: 00:19:37.032 { 00:19:37.032 "code": -5, 00:19:37.032 "message": "Input/output error" 00:19:37.032 } 00:19:37.032 10:27:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:37.032 10:27:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:37.032 10:27:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:37.032 10:27:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:37.032 10:27:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:37.032 10:27:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.032 10:27:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.032 10:27:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.032 10:27:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:19:37.032 10:27:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.032 10:27:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.032 10:27:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.032 10:27:13 
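The NOT wrapper used in these cases essentially asserts that the guarded command exits non-zero, which is what the JSON-RPC error -5 ("Input/output error") responses above amount to. Stripped of the framework, each mismatched-key case is roughly the sketch below (values copied from the key1/ckey2 attempt above; same assumptions about sockets and keyring entries as the rest of this run).

#!/usr/bin/env bash
# Negative-path sketch: the attach is expected to fail because the controller key offered
# by the host (ckey2) does not match what the target subsystem was configured with.
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396

if "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
    -a 192.168.100.8 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey2; then
  echo "ERROR: attach unexpectedly succeeded with mismatched DH-HMAC-CHAP keys" >&2
  exit 1
fi
echo "attach failed as expected"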
nvmf_rdma.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.032 10:27:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:37.032 10:27:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.032 10:27:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:37.032 10:27:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:37.032 10:27:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:37.032 10:27:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:37.032 10:27:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.032 10:27:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:09.133 request: 00:20:09.133 { 00:20:09.133 "name": "nvme0", 00:20:09.133 "trtype": "rdma", 00:20:09.133 "traddr": "192.168.100.8", 00:20:09.133 "adrfam": "ipv4", 00:20:09.133 "trsvcid": "4420", 00:20:09.133 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:09.133 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:09.133 "prchk_reftag": false, 00:20:09.133 "prchk_guard": false, 00:20:09.133 "hdgst": false, 00:20:09.133 "ddgst": false, 00:20:09.133 "dhchap_key": "key1", 00:20:09.133 "dhchap_ctrlr_key": "ckey1", 00:20:09.133 "method": "bdev_nvme_attach_controller", 00:20:09.133 "req_id": 1 00:20:09.133 } 00:20:09.133 Got JSON-RPC error response 00:20:09.133 response: 00:20:09.133 { 00:20:09.133 "code": -5, 00:20:09.133 "message": "Input/output error" 00:20:09.133 } 00:20:09.133 10:27:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:09.133 10:27:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:09.133 10:27:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:09.133 10:27:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:09.133 10:27:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:09.133 10:27:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.133 10:27:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.133 10:27:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:20:09.133 10:27:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 2923927 00:20:09.133 10:27:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 2923927 ']' 00:20:09.133 10:27:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 2923927 00:20:09.133 10:27:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:20:09.133 10:27:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:09.133 10:27:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2923927 00:20:09.133 10:27:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:09.133 10:27:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:09.133 10:27:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2923927' 00:20:09.133 killing process with pid 2923927 00:20:09.133 10:27:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 2923927 00:20:09.133 10:27:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 2923927 00:20:09.133 10:27:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:20:09.133 10:27:44 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:09.133 10:27:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:09.133 10:27:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.133 10:27:44 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2964625 00:20:09.133 10:27:44 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2964625 00:20:09.133 10:27:44 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:20:09.133 10:27:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2964625 ']' 00:20:09.133 10:27:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:09.133 10:27:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:09.133 10:27:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
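The second target instance above is started straight from the build tree with auth logging enabled. A rough equivalent outside the harness is sketched below; the background-and-poll loop is a simplification of the test's own waitforlisten helper, which performs the real readiness check in this run.

#!/usr/bin/env bash
# Sketch: bring up nvmf_tgt with the nvmf_auth log component and wait for its RPC socket.
# --wait-for-rpc keeps initialization paused until an explicit RPC tells the app to proceed,
# which the test's rpc_cmd helpers take care of in the log above.
nvmf_tgt=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt

"$nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
nvmfpid=$!

until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done
echo "nvmf_tgt is listening on /var/tmp/spdk.sock (pid $nvmfpid)"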
00:20:09.133 10:27:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:09.133 10:27:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.134 10:27:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:09.134 10:27:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:20:09.134 10:27:44 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:09.134 10:27:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:09.134 10:27:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.134 10:27:44 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:09.134 10:27:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:09.134 10:27:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 2964625 00:20:09.134 10:27:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 2964625 ']' 00:20:09.134 10:27:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:09.134 10:27:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:09.134 10:27:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:09.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:09.134 10:27:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:09.134 10:27:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.134 10:27:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:09.134 10:27:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:20:09.134 10:27:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:20:09.134 10:27:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.134 10:27:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.134 10:27:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.134 10:27:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:20:09.134 10:27:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:09.134 10:27:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:09.134 10:27:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:09.134 10:27:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:09.134 10:27:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.134 10:27:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:20:09.134 10:27:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.134 10:27:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.134 10:27:45 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.134 10:27:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:09.134 10:27:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:09.134 00:20:09.134 10:27:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:09.134 10:27:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:09.134 10:27:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.134 10:27:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.134 10:27:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.134 10:27:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.134 10:27:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.134 10:27:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.134 10:27:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:09.134 { 00:20:09.134 "cntlid": 1, 00:20:09.134 "qid": 0, 00:20:09.134 "state": "enabled", 00:20:09.134 "thread": "nvmf_tgt_poll_group_000", 00:20:09.134 "listen_address": { 00:20:09.134 "trtype": "RDMA", 00:20:09.134 "adrfam": "IPv4", 00:20:09.134 "traddr": "192.168.100.8", 00:20:09.134 "trsvcid": "4420" 00:20:09.134 }, 00:20:09.134 "peer_address": { 00:20:09.134 "trtype": "RDMA", 00:20:09.134 "adrfam": "IPv4", 00:20:09.134 "traddr": "192.168.100.8", 00:20:09.134 "trsvcid": "58086" 00:20:09.134 }, 00:20:09.134 "auth": { 00:20:09.134 "state": "completed", 00:20:09.134 "digest": "sha512", 00:20:09.134 "dhgroup": "ffdhe8192" 00:20:09.134 } 00:20:09.134 } 00:20:09.134 ]' 00:20:09.134 10:27:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:09.134 10:27:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:09.134 10:27:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:09.134 10:27:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:09.134 10:27:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:09.134 10:27:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.134 10:27:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.134 10:27:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.134 10:27:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 
00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:NWI2Njg1NjQ2ZWM2Y2NlYmM0YTMyYzNkZDRiYTNlYzdiYmM3NTViNjMxYjViZjE5NTc1NGFjOWQwZDY3NDM3OWUz75w=: 00:20:10.073 10:27:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.073 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.073 10:27:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:10.073 10:27:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.073 10:27:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.073 10:27:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.073 10:27:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:20:10.073 10:27:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.073 10:27:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.073 10:27:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.073 10:27:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:20:10.073 10:27:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:20:10.332 10:27:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:10.332 10:27:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:10.332 10:27:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:10.332 10:27:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:10.332 10:27:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:10.332 10:27:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:10.332 10:27:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:10.332 10:27:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:10.333 10:27:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:42.441 request: 00:20:42.441 { 00:20:42.441 "name": "nvme0", 
00:20:42.441 "trtype": "rdma", 00:20:42.441 "traddr": "192.168.100.8", 00:20:42.441 "adrfam": "ipv4", 00:20:42.441 "trsvcid": "4420", 00:20:42.441 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:42.441 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:42.441 "prchk_reftag": false, 00:20:42.441 "prchk_guard": false, 00:20:42.441 "hdgst": false, 00:20:42.441 "ddgst": false, 00:20:42.441 "dhchap_key": "key3", 00:20:42.441 "method": "bdev_nvme_attach_controller", 00:20:42.441 "req_id": 1 00:20:42.441 } 00:20:42.441 Got JSON-RPC error response 00:20:42.441 response: 00:20:42.441 { 00:20:42.441 "code": -5, 00:20:42.441 "message": "Input/output error" 00:20:42.441 } 00:20:42.441 10:28:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:42.441 10:28:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:42.441 10:28:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:42.441 10:28:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:42.441 10:28:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:20:42.441 10:28:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:20:42.441 10:28:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:42.441 10:28:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:42.441 10:28:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:42.441 10:28:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:42.441 10:28:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:42.441 10:28:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:42.441 10:28:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:42.441 10:28:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:42.441 10:28:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:42.441 10:28:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:42.441 10:28:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:14.588 request: 00:21:14.588 { 00:21:14.588 "name": "nvme0", 
00:21:14.588 "trtype": "rdma", 00:21:14.588 "traddr": "192.168.100.8", 00:21:14.588 "adrfam": "ipv4", 00:21:14.588 "trsvcid": "4420", 00:21:14.588 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:14.588 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:14.588 "prchk_reftag": false, 00:21:14.588 "prchk_guard": false, 00:21:14.588 "hdgst": false, 00:21:14.588 "ddgst": false, 00:21:14.588 "dhchap_key": "key3", 00:21:14.588 "method": "bdev_nvme_attach_controller", 00:21:14.588 "req_id": 1 00:21:14.588 } 00:21:14.588 Got JSON-RPC error response 00:21:14.588 response: 00:21:14.588 { 00:21:14.588 "code": -5, 00:21:14.588 "message": "Input/output error" 00:21:14.588 } 00:21:14.588 10:28:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:14.588 10:28:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:14.588 10:28:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:14.588 10:28:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:14.588 10:28:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:21:14.588 10:28:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:21:14.588 10:28:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:21:14.588 10:28:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:14.588 10:28:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:14.588 10:28:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:14.588 10:28:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:14.588 10:28:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.588 10:28:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.588 10:28:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.588 10:28:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:14.588 10:28:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.588 10:28:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.588 10:28:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.588 10:28:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:14.588 10:28:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:14.588 10:28:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc 
bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:14.588 10:28:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:14.588 10:28:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:14.588 10:28:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:14.588 10:28:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:14.588 10:28:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:14.588 10:28:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:14.588 request: 00:21:14.588 { 00:21:14.588 "name": "nvme0", 00:21:14.588 "trtype": "rdma", 00:21:14.588 "traddr": "192.168.100.8", 00:21:14.588 "adrfam": "ipv4", 00:21:14.588 "trsvcid": "4420", 00:21:14.588 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:14.588 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:21:14.588 "prchk_reftag": false, 00:21:14.588 "prchk_guard": false, 00:21:14.588 "hdgst": false, 00:21:14.588 "ddgst": false, 00:21:14.588 "dhchap_key": "key0", 00:21:14.588 "dhchap_ctrlr_key": "key1", 00:21:14.588 "method": "bdev_nvme_attach_controller", 00:21:14.588 "req_id": 1 00:21:14.588 } 00:21:14.588 Got JSON-RPC error response 00:21:14.588 response: 00:21:14.588 { 00:21:14.588 "code": -5, 00:21:14.588 "message": "Input/output error" 00:21:14.588 } 00:21:14.588 10:28:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:14.588 10:28:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:14.588 10:28:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:14.588 10:28:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:14.588 10:28:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:14.588 10:28:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:14.588 00:21:14.588 10:28:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:21:14.588 10:28:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:14.588 10:28:48 
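The digest/DH-group mismatch cases earlier in this pass are driven purely from the host side with bdev_nvme_set_options: each narrowing of the allowed sets precedes one of the expected -5 attach failures shown above, and the final call restores the full lists before the last positive attach with key0. Condensed from the calls in the log (same host socket assumption as the rest of this run):

#!/usr/bin/env bash
# Sketch of the host-side option changes used for the mismatch cases above.
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

# Restrict the host to sha256 only; the attach attempt that follows in the log fails with -5.
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256

# Restrict DH groups to ffdhe2048; again the next attach attempt is expected to fail.
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
  --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512

# Restore the full digest and DH-group lists for the remaining cases.
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
  --dhchap-digests sha256,sha384,sha512 \
  --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192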
nvmf_rdma.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:21:14.588 10:28:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.588 10:28:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:14.588 10:28:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.588 10:28:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:21:14.588 10:28:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:21:14.588 10:28:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2923963 00:21:14.588 10:28:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 2923963 ']' 00:21:14.588 10:28:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 2923963 00:21:14.588 10:28:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:21:14.588 10:28:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:14.588 10:28:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2923963 00:21:14.588 10:28:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:14.588 10:28:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:14.588 10:28:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2923963' 00:21:14.588 killing process with pid 2923963 00:21:14.588 10:28:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 2923963 00:21:14.588 10:28:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 2923963 00:21:14.588 10:28:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:21:14.588 10:28:49 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:14.588 10:28:49 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:21:14.588 10:28:49 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:21:14.588 10:28:49 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:21:14.588 10:28:49 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:21:14.588 10:28:49 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:14.588 10:28:49 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:21:14.588 rmmod nvme_rdma 00:21:14.588 rmmod nvme_fabrics 00:21:14.588 10:28:49 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:14.588 10:28:49 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:21:14.588 10:28:49 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:21:14.588 10:28:49 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 2964625 ']' 00:21:14.588 10:28:49 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 2964625 00:21:14.588 10:28:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 2964625 ']' 00:21:14.588 10:28:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 2964625 00:21:14.588 10:28:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:21:14.588 10:28:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
00:21:14.589 10:28:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2964625 00:21:14.589 10:28:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:14.589 10:28:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:14.589 10:28:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2964625' 00:21:14.589 killing process with pid 2964625 00:21:14.589 10:28:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 2964625 00:21:14.589 10:28:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 2964625 00:21:14.589 10:28:49 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:14.589 10:28:49 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:21:14.589 10:28:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.zLy /tmp/spdk.key-sha256.VoM /tmp/spdk.key-sha384.trZ /tmp/spdk.key-sha512.Nvf /tmp/spdk.key-sha512.TRk /tmp/spdk.key-sha384.Vej /tmp/spdk.key-sha256.zKA '' /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf-auth.log 00:21:14.589 00:21:14.589 real 4m36.354s 00:21:14.589 user 9m46.885s 00:21:14.589 sys 0m18.468s 00:21:14.589 10:28:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:14.589 10:28:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.589 ************************************ 00:21:14.589 END TEST nvmf_auth_target 00:21:14.589 ************************************ 00:21:14.589 10:28:49 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:21:14.589 10:28:49 nvmf_rdma -- nvmf/nvmf.sh@59 -- # '[' rdma = tcp ']' 00:21:14.589 10:28:49 nvmf_rdma -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:21:14.589 10:28:49 nvmf_rdma -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:21:14.589 10:28:49 nvmf_rdma -- nvmf/nvmf.sh@72 -- # '[' rdma = tcp ']' 00:21:14.589 10:28:49 nvmf_rdma -- nvmf/nvmf.sh@78 -- # [[ rdma == \r\d\m\a ]] 00:21:14.589 10:28:49 nvmf_rdma -- nvmf/nvmf.sh@81 -- # run_test nvmf_srq_overwhelm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:21:14.589 10:28:49 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:14.589 10:28:49 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:14.589 10:28:49 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:21:14.589 ************************************ 00:21:14.589 START TEST nvmf_srq_overwhelm 00:21:14.589 ************************************ 00:21:14.589 10:28:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:21:14.589 * Looking for test storage... 
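With the auth suite finished (END TEST nvmf_auth_target), the harness moves on to nvmf_srq_overwhelm through its run_test wrapper. Outside the harness, the equivalent direct invocation would presumably be just the script path and transport flag shown in the log, run from the same workspace and with the same root privileges the CI uses:

# Illustrative only; assumes the SPDK build and RDMA NIC setup from this run are already in place.
cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
./test/nvmf/target/srq_overwhelm.sh --transport=rdma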
00:21:14.589 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:21:14.589 10:28:49 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:14.589 10:28:49 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # uname -s 00:21:14.589 10:28:49 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:14.589 10:28:49 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:14.589 10:28:49 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:14.589 10:28:49 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:14.589 10:28:49 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:14.589 10:28:49 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:14.589 10:28:49 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:14.589 10:28:49 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:14.589 10:28:49 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:14.589 10:28:49 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:14.589 10:28:49 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:14.589 10:28:49 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:14.589 10:28:49 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:14.589 10:28:49 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:14.589 10:28:49 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:14.589 10:28:49 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:14.589 10:28:49 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:14.589 10:28:49 nvmf_rdma.nvmf_srq_overwhelm -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:14.589 10:28:49 nvmf_rdma.nvmf_srq_overwhelm -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:14.589 10:28:49 nvmf_rdma.nvmf_srq_overwhelm -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:14.589 10:28:49 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.589 10:28:49 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.589 10:28:49 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.589 10:28:49 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@5 -- # export PATH 00:21:14.589 10:28:49 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.589 10:28:49 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@47 -- # : 0 00:21:14.589 10:28:49 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:14.589 10:28:49 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:14.589 10:28:49 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:14.589 10:28:49 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:14.589 10:28:49 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:14.589 10:28:49 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:14.589 10:28:49 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:14.589 10:28:49 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:14.589 10:28:49 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:14.589 10:28:49 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:14.589 10:28:49 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@13 -- # NVME_CONNECT='nvme connect -i 16' 00:21:14.589 10:28:49 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@15 -- # nvmftestinit 00:21:14.589 10:28:49 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:21:14.589 10:28:49 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:14.589 10:28:49 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:14.589 10:28:49 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:21:14.589 10:28:49 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:14.589 10:28:49 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:14.589 10:28:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:14.589 10:28:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:14.589 10:28:49 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:14.589 10:28:49 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:14.589 10:28:49 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@285 -- # xtrace_disable 00:21:14.589 10:28:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@291 -- # pci_devs=() 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@295 -- # net_devs=() 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@296 -- # e810=() 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@296 -- # local -ga e810 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@297 -- # x722=() 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@297 -- # local -ga x722 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@298 -- # mlx=() 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@298 -- # local -ga mlx 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:21:21.184 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:21:21.184 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:21:21.184 Found net devices under 0000:98:00.0: mlx_0_0 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:21:21.184 Found net devices under 0000:98:00.1: mlx_0_1 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # is_hw=yes 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@420 -- # rdma_device_init 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # uname 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # modprobe ib_cm 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@63 -- # modprobe ib_core 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@64 -- # modprobe ib_umad 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@66 -- # modprobe iw_cm 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@502 -- # allocate_nic_ips 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@73 -- # get_rdma_if_list 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:21.184 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_0 == 
\m\l\x\_\0\_\0 ]] 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:21:21.185 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:21.185 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:21:21.185 altname enp152s0f0np0 00:21:21.185 altname ens817f0np0 00:21:21.185 inet 192.168.100.8/24 scope global mlx_0_0 00:21:21.185 valid_lft forever preferred_lft forever 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:21:21.185 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:21.185 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:21:21.185 altname enp152s0f1np1 00:21:21.185 altname ens817f1np1 00:21:21.185 inet 192.168.100.9/24 scope global mlx_0_1 00:21:21.185 valid_lft forever preferred_lft forever 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@422 -- # return 0 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@450 -- # '[' '' == iso ']' 
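The trace above resolves each RDMA netdev to its IPv4 address by parsing "ip -o -4 addr show" with awk and cut. A minimal standalone sketch of that extraction, using the interface names and addresses seen in this run (the helper name below is illustrative, not the harness's own function):

    # Print the first IPv4 address assigned to a netdev, using the same
    # awk/cut pipeline the harness traces above.
    get_ipv4_sketch() {
        local ifname=$1
        ip -o -4 addr show "$ifname" | awk '{print $4}' | cut -d/ -f1
    }

    get_ipv4_sketch mlx_0_0   # 192.168.100.8 in this run
    get_ipv4_sketch mlx_0_1   # 192.168.100.9 in this run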
00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@86 -- # get_rdma_if_list 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:21:21.185 
192.168.100.9' 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:21:21.185 192.168.100.9' 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@457 -- # head -n 1 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:21:21.185 192.168.100.9' 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # tail -n +2 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # head -n 1 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@17 -- # nvmfappstart -m 0xF 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@481 -- # nvmfpid=2981417 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@482 -- # waitforlisten 2981417 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@829 -- # '[' -z 2981417 ']' 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:21.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:21.185 10:28:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:21.185 [2024-07-15 10:28:57.735610] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:21:21.185 [2024-07-15 10:28:57.735682] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:21.185 EAL: No free 2048 kB hugepages reported on node 1 00:21:21.185 [2024-07-15 10:28:57.807642] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:21.185 [2024-07-15 10:28:57.883498] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:21.185 [2024-07-15 10:28:57.883536] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:21.185 [2024-07-15 10:28:57.883544] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:21.185 [2024-07-15 10:28:57.883550] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:21.185 [2024-07-15 10:28:57.883556] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:21.185 [2024-07-15 10:28:57.883700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:21.185 [2024-07-15 10:28:57.883801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:21.185 [2024-07-15 10:28:57.883960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:21.185 [2024-07-15 10:28:57.883961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:21.447 10:28:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:21.447 10:28:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@862 -- # return 0 00:21:21.447 10:28:58 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:21.447 10:28:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:21.447 10:28:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:21.447 10:28:58 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:21.447 10:28:58 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024 00:21:21.447 10:28:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.447 10:28:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:21.447 [2024-07-15 10:28:58.601735] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xd0f200/0xd136f0) succeed. 00:21:21.447 [2024-07-15 10:28:58.616227] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xd10840/0xd54d80) succeed. 
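With both mlx5 ports mapped to 192.168.100.8 and 192.168.100.9, the harness starts the target application and creates the RDMA transport. Condensed from the trace into two commands (paths and arguments are copied from the log; treating rpc_cmd as a thin wrapper around scripts/rpc.py is an assumption about the harness, not something shown here):

    # Launch the NVMe-oF target with the instance id, trace flags and core mask from the trace.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

    # Once it listens on /var/tmp/spdk.sock, create the RDMA transport with the
    # same sizing the srq_overwhelm test uses.
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024

The two create_ib_device notices directly above confirm the transport picked up both mlx5 devices.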
00:21:21.708 10:28:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.708 10:28:58 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # seq 0 5 00:21:21.708 10:28:58 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:21:21.708 10:28:58 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000000 00:21:21.708 10:28:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.708 10:28:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:21.708 10:28:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.708 10:28:58 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:21.708 10:28:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.708 10:28:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:21.708 Malloc0 00:21:21.708 10:28:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.708 10:28:58 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:21:21.708 10:28:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.708 10:28:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:21.708 10:28:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.708 10:28:58 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:21:21.708 10:28:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.708 10:28:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:21.708 [2024-07-15 10:28:58.718851] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:21.708 10:28:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.708 10:28:58 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode0 -a 192.168.100.8 -s 4420 00:21:23.145 10:29:00 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme0n1 00:21:23.145 10:29:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:21:23.145 10:29:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:21:23.145 10:29:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:21:23.145 10:29:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:21:23.145 10:29:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:21:23.145 10:29:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:21:23.145 10:29:00 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:21:23.145 10:29:00 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:23.145 10:29:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.145 10:29:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:23.145 10:29:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.145 10:29:00 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:23.145 10:29:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.145 10:29:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:23.145 Malloc1 00:21:23.145 10:29:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.145 10:29:00 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:23.145 10:29:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.145 10:29:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:23.145 10:29:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.145 10:29:00 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:23.145 10:29:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.145 10:29:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:23.145 10:29:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.145 10:29:00 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:21:24.525 10:29:01 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme1n1 00:21:24.525 10:29:01 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:21:24.525 10:29:01 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:21:24.525 10:29:01 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme1n1 00:21:24.525 10:29:01 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme1n1 00:21:24.525 10:29:01 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:21:24.525 10:29:01 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:21:24.525 10:29:01 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:21:24.525 10:29:01 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:21:24.525 10:29:01 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.525 10:29:01 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:24.525 10:29:01 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.525 10:29:01 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:21:24.525 10:29:01 nvmf_rdma.nvmf_srq_overwhelm -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.525 10:29:01 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:24.525 Malloc2 00:21:24.525 10:29:01 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.525 10:29:01 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:21:24.525 10:29:01 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.525 10:29:01 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:24.525 10:29:01 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.525 10:29:01 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:21:24.525 10:29:01 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.525 10:29:01 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:24.525 10:29:01 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.525 10:29:01 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:21:25.909 10:29:02 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme2n1 00:21:25.909 10:29:02 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:21:25.909 10:29:02 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:21:25.909 10:29:02 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme2n1 00:21:25.909 10:29:02 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:21:25.909 10:29:02 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme2n1 00:21:25.909 10:29:02 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:21:25.909 10:29:02 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:21:25.909 10:29:02 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:21:25.909 10:29:02 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.909 10:29:02 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:25.909 10:29:02 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.909 10:29:02 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:21:25.909 10:29:02 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.909 10:29:02 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:25.909 Malloc3 00:21:25.909 10:29:03 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.909 10:29:03 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:21:25.909 10:29:03 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.909 10:29:03 
nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:25.909 10:29:03 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.909 10:29:03 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:21:25.909 10:29:03 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.909 10:29:03 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:25.909 10:29:03 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.909 10:29:03 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:21:27.296 10:29:04 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme3n1 00:21:27.296 10:29:04 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:21:27.296 10:29:04 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:21:27.296 10:29:04 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme3n1 00:21:27.296 10:29:04 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:21:27.296 10:29:04 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme3n1 00:21:27.296 10:29:04 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:21:27.296 10:29:04 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:21:27.296 10:29:04 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:21:27.296 10:29:04 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.296 10:29:04 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:27.296 10:29:04 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.296 10:29:04 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:21:27.296 10:29:04 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.296 10:29:04 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:27.296 Malloc4 00:21:27.296 10:29:04 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.296 10:29:04 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:21:27.296 10:29:04 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.296 10:29:04 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:27.296 10:29:04 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.296 10:29:04 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:21:27.296 10:29:04 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.296 10:29:04 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 
00:21:27.296 10:29:04 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.296 10:29:04 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:21:29.209 10:29:05 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme4n1 00:21:29.209 10:29:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:21:29.209 10:29:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:21:29.209 10:29:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme4n1 00:21:29.209 10:29:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:21:29.209 10:29:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme4n1 00:21:29.209 10:29:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:21:29.209 10:29:05 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:21:29.209 10:29:05 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK00000000000005 00:21:29.209 10:29:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.209 10:29:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:29.209 10:29:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.209 10:29:05 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:21:29.209 10:29:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.209 10:29:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:29.209 Malloc5 00:21:29.209 10:29:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.209 10:29:05 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:21:29.209 10:29:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.209 10:29:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:29.209 10:29:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.209 10:29:05 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:21:29.209 10:29:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.209 10:29:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:29.209 10:29:05 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.209 10:29:05 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:21:30.625 10:29:07 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme5n1 00:21:30.625 10:29:07 nvmf_rdma.nvmf_srq_overwhelm -- 
common/autotest_common.sh@1235 -- # local i=0 00:21:30.625 10:29:07 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme5n1 00:21:30.625 10:29:07 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:21:30.625 10:29:07 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:21:30.625 10:29:07 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme5n1 00:21:30.625 10:29:07 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:21:30.625 10:29:07 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 1048576 -d 128 -t read -r 10 -n 13 00:21:30.625 [global] 00:21:30.625 thread=1 00:21:30.625 invalidate=1 00:21:30.625 rw=read 00:21:30.625 time_based=1 00:21:30.625 runtime=10 00:21:30.625 ioengine=libaio 00:21:30.625 direct=1 00:21:30.625 bs=1048576 00:21:30.625 iodepth=128 00:21:30.625 norandommap=1 00:21:30.625 numjobs=13 00:21:30.625 00:21:30.625 [job0] 00:21:30.625 filename=/dev/nvme0n1 00:21:30.625 [job1] 00:21:30.625 filename=/dev/nvme1n1 00:21:30.625 [job2] 00:21:30.625 filename=/dev/nvme2n1 00:21:30.625 [job3] 00:21:30.625 filename=/dev/nvme3n1 00:21:30.625 [job4] 00:21:30.625 filename=/dev/nvme4n1 00:21:30.625 [job5] 00:21:30.625 filename=/dev/nvme5n1 00:21:30.625 Could not set queue depth (nvme0n1) 00:21:30.625 Could not set queue depth (nvme1n1) 00:21:30.625 Could not set queue depth (nvme2n1) 00:21:30.625 Could not set queue depth (nvme3n1) 00:21:30.625 Could not set queue depth (nvme4n1) 00:21:30.625 Could not set queue depth (nvme5n1) 00:21:30.891 job0: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:21:30.891 ... 00:21:30.891 job1: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:21:30.891 ... 00:21:30.891 job2: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:21:30.891 ... 00:21:30.891 job3: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:21:30.891 ... 00:21:30.891 job4: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:21:30.891 ... 00:21:30.891 job5: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:21:30.891 ... 
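The block above repeats the same setup for cnode0 through cnode5: create a subsystem, back it with a malloc bdev, attach the namespace, add an RDMA listener on 192.168.100.8:4420, connect from the host, and wait for the block device to appear. Written as the single loop it amounts to (NQNs, serial numbers, host UUID, and addresses are copied from the trace; the loop is a condensed sketch, not srq_overwhelm.sh verbatim):

    for i in $(seq 0 5); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
        rpc_cmd bdev_malloc_create 64 512 -b Malloc$i
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
        nvme connect -i 15 \
            --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
            --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 \
            -t rdma -n nqn.2016-06.io.spdk:cnode$i -a 192.168.100.8 -s 4420
        waitforblk nvme${i}n1   # harness helper: polls lsblk until the namespace shows up
    done

The fio-wrapper call that follows drives 1 MiB reads at queue depth 128 for 10 seconds with 13 jobs per device, which matches the 78 threads (6 namespaces x 13 jobs) fio reports starting below.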
00:21:30.891 fio-3.35 00:21:30.891 Starting 78 threads 00:21:43.142 00:21:43.142 job0: (groupid=0, jobs=1): err= 0: pid=2983677: Mon Jul 15 10:29:18 2024 00:21:43.142 read: IOPS=19, BW=19.2MiB/s (20.2MB/s)(197MiB/10237msec) 00:21:43.142 slat (usec): min=25, max=2153.9k, avg=51833.18, stdev=277512.37 00:21:43.142 clat (msec): min=25, max=7687, avg=5135.25, stdev=2672.18 00:21:43.142 lat (msec): min=1015, max=7698, avg=5187.09, stdev=2638.07 00:21:43.142 clat percentiles (msec): 00:21:43.142 | 1.00th=[ 1003], 5.00th=[ 1083], 10.00th=[ 1200], 20.00th=[ 1536], 00:21:43.142 | 30.00th=[ 2869], 40.00th=[ 5671], 50.00th=[ 7080], 60.00th=[ 7148], 00:21:43.142 | 70.00th=[ 7282], 80.00th=[ 7349], 90.00th=[ 7550], 95.00th=[ 7617], 00:21:43.142 | 99.00th=[ 7684], 99.50th=[ 7684], 99.90th=[ 7684], 99.95th=[ 7684], 00:21:43.142 | 99.99th=[ 7684] 00:21:43.142 bw ( KiB/s): min= 2048, max=71680, per=0.62%, avg=28262.40, stdev=27757.77, samples=5 00:21:43.142 iops : min= 2, max= 70, avg=27.60, stdev=27.11, samples=5 00:21:43.142 lat (msec) : 50=0.51%, 2000=25.38%, >=2000=74.11% 00:21:43.142 cpu : usr=0.00%, sys=0.53%, ctx=369, majf=0, minf=32769 00:21:43.142 IO depths : 1=0.5%, 2=1.0%, 4=2.0%, 8=4.1%, 16=8.1%, 32=16.2%, >=64=68.0% 00:21:43.142 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.142 complete : 0=0.0%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.4% 00:21:43.142 issued rwts: total=197,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.142 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.142 job0: (groupid=0, jobs=1): err= 0: pid=2983678: Mon Jul 15 10:29:18 2024 00:21:43.142 read: IOPS=88, BW=88.6MiB/s (92.9MB/s)(889MiB/10030msec) 00:21:43.143 slat (usec): min=23, max=2116.4k, avg=11244.15, stdev=109811.81 00:21:43.143 clat (msec): min=28, max=6363, avg=921.66, stdev=1331.86 00:21:43.143 lat (msec): min=30, max=6372, avg=932.90, stdev=1344.74 00:21:43.143 clat percentiles (msec): 00:21:43.143 | 1.00th=[ 37], 5.00th=[ 113], 10.00th=[ 226], 20.00th=[ 464], 00:21:43.143 | 30.00th=[ 535], 40.00th=[ 600], 50.00th=[ 634], 60.00th=[ 659], 00:21:43.143 | 70.00th=[ 701], 80.00th=[ 743], 90.00th=[ 802], 95.00th=[ 5000], 00:21:43.143 | 99.00th=[ 6275], 99.50th=[ 6275], 99.90th=[ 6342], 99.95th=[ 6342], 00:21:43.143 | 99.99th=[ 6342] 00:21:43.143 bw ( KiB/s): min=49152, max=249357, per=3.89%, avg=178104.71, stdev=62382.78, samples=7 00:21:43.143 iops : min= 48, max= 243, avg=173.86, stdev=60.82, samples=7 00:21:43.143 lat (msec) : 50=1.80%, 100=2.02%, 250=6.52%, 500=11.47%, 750=58.72% 00:21:43.143 lat (msec) : 1000=12.26%, >=2000=7.20% 00:21:43.143 cpu : usr=0.05%, sys=1.51%, ctx=846, majf=0, minf=32769 00:21:43.143 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.8%, 32=3.6%, >=64=92.9% 00:21:43.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.143 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:43.143 issued rwts: total=889,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.143 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.143 job0: (groupid=0, jobs=1): err= 0: pid=2983679: Mon Jul 15 10:29:18 2024 00:21:43.143 read: IOPS=64, BW=64.7MiB/s (67.9MB/s)(667MiB/10302msec) 00:21:43.143 slat (usec): min=30, max=2117.7k, avg=15425.76, stdev=135220.04 00:21:43.143 clat (msec): min=9, max=6386, avg=1662.51, stdev=1751.14 00:21:43.143 lat (msec): min=361, max=6405, avg=1677.94, stdev=1755.64 00:21:43.143 clat percentiles (msec): 00:21:43.143 | 1.00th=[ 363], 5.00th=[ 380], 10.00th=[ 
393], 20.00th=[ 397], 00:21:43.143 | 30.00th=[ 397], 40.00th=[ 426], 50.00th=[ 447], 60.00th=[ 493], 00:21:43.143 | 70.00th=[ 2769], 80.00th=[ 3943], 90.00th=[ 4530], 95.00th=[ 4665], 00:21:43.143 | 99.00th=[ 4732], 99.50th=[ 6342], 99.90th=[ 6409], 99.95th=[ 6409], 00:21:43.143 | 99.99th=[ 6409] 00:21:43.143 bw ( KiB/s): min=36937, max=313344, per=4.01%, avg=183921.33, stdev=113425.91, samples=6 00:21:43.143 iops : min= 36, max= 306, avg=179.50, stdev=110.76, samples=6 00:21:43.143 lat (msec) : 10=0.15%, 500=60.27%, 750=3.15%, 1000=1.50%, 2000=1.35% 00:21:43.143 lat (msec) : >=2000=33.58% 00:21:43.143 cpu : usr=0.02%, sys=0.95%, ctx=792, majf=0, minf=32769 00:21:43.143 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.8%, >=64=90.6% 00:21:43.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.143 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:21:43.143 issued rwts: total=667,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.143 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.143 job0: (groupid=0, jobs=1): err= 0: pid=2983680: Mon Jul 15 10:29:18 2024 00:21:43.143 read: IOPS=3, BW=3295KiB/s (3374kB/s)(33.0MiB/10255msec) 00:21:43.143 slat (msec): min=4, max=2091, avg=309.44, stdev=715.80 00:21:43.143 clat (msec): min=42, max=10207, avg=5297.90, stdev=3084.94 00:21:43.143 lat (msec): min=2097, max=10254, avg=5607.34, stdev=3053.36 00:21:43.143 clat percentiles (msec): 00:21:43.143 | 1.00th=[ 44], 5.00th=[ 2106], 10.00th=[ 2106], 20.00th=[ 2123], 00:21:43.143 | 30.00th=[ 2140], 40.00th=[ 4279], 50.00th=[ 4329], 60.00th=[ 6409], 00:21:43.143 | 70.00th=[ 6477], 80.00th=[ 8557], 90.00th=[10134], 95.00th=[10134], 00:21:43.143 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268], 00:21:43.143 | 99.99th=[10268] 00:21:43.143 lat (msec) : 50=3.03%, >=2000=96.97% 00:21:43.143 cpu : usr=0.00%, sys=0.21%, ctx=90, majf=0, minf=8449 00:21:43.143 IO depths : 1=3.0%, 2=6.1%, 4=12.1%, 8=24.2%, 16=48.5%, 32=6.1%, >=64=0.0% 00:21:43.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.143 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:21:43.143 issued rwts: total=33,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.143 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.143 job0: (groupid=0, jobs=1): err= 0: pid=2983681: Mon Jul 15 10:29:18 2024 00:21:43.143 read: IOPS=27, BW=27.8MiB/s (29.1MB/s)(285MiB/10266msec) 00:21:43.143 slat (usec): min=30, max=2117.6k, avg=35927.23, stdev=226487.25 00:21:43.143 clat (msec): min=25, max=10219, avg=4224.05, stdev=3604.86 00:21:43.143 lat (msec): min=676, max=10259, avg=4259.98, stdev=3603.34 00:21:43.143 clat percentiles (msec): 00:21:43.143 | 1.00th=[ 676], 5.00th=[ 684], 10.00th=[ 743], 20.00th=[ 827], 00:21:43.143 | 30.00th=[ 936], 40.00th=[ 1036], 50.00th=[ 2106], 60.00th=[ 6745], 00:21:43.143 | 70.00th=[ 8020], 80.00th=[ 8423], 90.00th=[ 8658], 95.00th=[ 8792], 00:21:43.143 | 99.00th=[10134], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268], 00:21:43.143 | 99.99th=[10268] 00:21:43.143 bw ( KiB/s): min= 2048, max=108327, per=1.17%, avg=53545.67, stdev=47483.78, samples=6 00:21:43.143 iops : min= 2, max= 105, avg=52.00, stdev=46.32, samples=6 00:21:43.143 lat (msec) : 50=0.35%, 750=10.53%, 1000=24.56%, 2000=14.04%, >=2000=50.53% 00:21:43.143 cpu : usr=0.03%, sys=0.64%, ctx=422, majf=0, minf=32769 00:21:43.143 IO depths : 1=0.4%, 2=0.7%, 4=1.4%, 8=2.8%, 16=5.6%, 32=11.2%, >=64=77.9% 00:21:43.143 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.143 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:21:43.143 issued rwts: total=285,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.143 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.143 job0: (groupid=0, jobs=1): err= 0: pid=2983682: Mon Jul 15 10:29:18 2024 00:21:43.143 read: IOPS=44, BW=44.7MiB/s (46.9MB/s)(462MiB/10325msec) 00:21:43.143 slat (usec): min=23, max=2091.9k, avg=22268.55, stdev=154216.40 00:21:43.143 clat (msec): min=33, max=5571, avg=2325.19, stdev=1678.02 00:21:43.143 lat (msec): min=954, max=5573, avg=2347.46, stdev=1676.71 00:21:43.143 clat percentiles (msec): 00:21:43.143 | 1.00th=[ 953], 5.00th=[ 961], 10.00th=[ 961], 20.00th=[ 1099], 00:21:43.143 | 30.00th=[ 1217], 40.00th=[ 1318], 50.00th=[ 1351], 60.00th=[ 1418], 00:21:43.143 | 70.00th=[ 2802], 80.00th=[ 4665], 90.00th=[ 5269], 95.00th=[ 5336], 00:21:43.143 | 99.00th=[ 5537], 99.50th=[ 5537], 99.90th=[ 5604], 99.95th=[ 5604], 00:21:43.143 | 99.99th=[ 5604] 00:21:43.143 bw ( KiB/s): min= 4096, max=141312, per=1.66%, avg=75989.78, stdev=48877.11, samples=9 00:21:43.143 iops : min= 4, max= 138, avg=74.00, stdev=47.96, samples=9 00:21:43.143 lat (msec) : 50=0.22%, 1000=13.64%, 2000=52.81%, >=2000=33.33% 00:21:43.143 cpu : usr=0.04%, sys=1.18%, ctx=603, majf=0, minf=32769 00:21:43.143 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.7%, 16=3.5%, 32=6.9%, >=64=86.4% 00:21:43.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.143 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:21:43.143 issued rwts: total=462,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.143 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.143 job0: (groupid=0, jobs=1): err= 0: pid=2983683: Mon Jul 15 10:29:18 2024 00:21:43.143 read: IOPS=2, BW=2705KiB/s (2770kB/s)(27.0MiB/10220msec) 00:21:43.143 slat (usec): min=650, max=4240.4k, avg=376981.06, stdev=983136.49 00:21:43.143 clat (msec): min=40, max=10214, avg=8795.41, stdev=2432.20 00:21:43.143 lat (msec): min=4280, max=10219, avg=9172.40, stdev=1702.33 00:21:43.143 clat percentiles (msec): 00:21:43.143 | 1.00th=[ 41], 5.00th=[ 4279], 10.00th=[ 6342], 20.00th=[ 6477], 00:21:43.143 | 30.00th=[ 8557], 40.00th=[10134], 50.00th=[10268], 60.00th=[10268], 00:21:43.143 | 70.00th=[10268], 80.00th=[10268], 90.00th=[10268], 95.00th=[10268], 00:21:43.143 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268], 00:21:43.143 | 99.99th=[10268] 00:21:43.143 lat (msec) : 50=3.70%, >=2000=96.30% 00:21:43.143 cpu : usr=0.01%, sys=0.16%, ctx=70, majf=0, minf=6913 00:21:43.143 IO depths : 1=3.7%, 2=7.4%, 4=14.8%, 8=29.6%, 16=44.4%, 32=0.0%, >=64=0.0% 00:21:43.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.143 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:21:43.143 issued rwts: total=27,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.143 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.143 job0: (groupid=0, jobs=1): err= 0: pid=2983684: Mon Jul 15 10:29:18 2024 00:21:43.143 read: IOPS=89, BW=89.1MiB/s (93.5MB/s)(897MiB/10063msec) 00:21:43.143 slat (usec): min=32, max=2093.0k, avg=11158.83, stdev=111077.05 00:21:43.143 clat (msec): min=48, max=6709, avg=984.57, stdev=1583.84 00:21:43.143 lat (msec): min=67, max=6716, avg=995.73, stdev=1597.74 00:21:43.143 clat percentiles (msec): 00:21:43.143 | 1.00th=[ 75], 5.00th=[ 222], 10.00th=[ 
409], 20.00th=[ 514], 00:21:43.143 | 30.00th=[ 523], 40.00th=[ 531], 50.00th=[ 542], 60.00th=[ 558], 00:21:43.143 | 70.00th=[ 609], 80.00th=[ 625], 90.00th=[ 718], 95.00th=[ 6611], 00:21:43.143 | 99.00th=[ 6678], 99.50th=[ 6678], 99.90th=[ 6678], 99.95th=[ 6678], 00:21:43.143 | 99.99th=[ 6678] 00:21:43.143 bw ( KiB/s): min=169984, max=253952, per=4.91%, avg=224987.43, stdev=32261.22, samples=7 00:21:43.143 iops : min= 166, max= 248, avg=219.71, stdev=31.51, samples=7 00:21:43.143 lat (msec) : 50=0.11%, 100=1.56%, 250=3.57%, 500=7.69%, 750=77.59% 00:21:43.143 lat (msec) : 1000=1.45%, >=2000=8.03% 00:21:43.143 cpu : usr=0.03%, sys=1.68%, ctx=970, majf=0, minf=32769 00:21:43.143 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.8%, 32=3.6%, >=64=93.0% 00:21:43.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.143 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:43.143 issued rwts: total=897,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.143 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.143 job0: (groupid=0, jobs=1): err= 0: pid=2983685: Mon Jul 15 10:29:18 2024 00:21:43.143 read: IOPS=2, BW=2593KiB/s (2655kB/s)(26.0MiB/10269msec) 00:21:43.143 slat (usec): min=1473, max=2113.6k, avg=393389.29, stdev=796955.22 00:21:43.143 clat (msec): min=40, max=10250, avg=7815.04, stdev=3128.79 00:21:43.143 lat (msec): min=2119, max=10268, avg=8208.43, stdev=2729.72 00:21:43.143 clat percentiles (msec): 00:21:43.143 | 1.00th=[ 41], 5.00th=[ 2123], 10.00th=[ 2140], 20.00th=[ 4329], 00:21:43.143 | 30.00th=[ 6409], 40.00th=[ 8557], 50.00th=[ 8557], 60.00th=[10134], 00:21:43.143 | 70.00th=[10134], 80.00th=[10268], 90.00th=[10268], 95.00th=[10268], 00:21:43.143 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268], 00:21:43.143 | 99.99th=[10268] 00:21:43.143 lat (msec) : 50=3.85%, >=2000=96.15% 00:21:43.143 cpu : usr=0.00%, sys=0.11%, ctx=95, majf=0, minf=6657 00:21:43.143 IO depths : 1=3.8%, 2=7.7%, 4=15.4%, 8=30.8%, 16=42.3%, 32=0.0%, >=64=0.0% 00:21:43.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.143 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:21:43.143 issued rwts: total=26,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.143 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.143 job0: (groupid=0, jobs=1): err= 0: pid=2983686: Mon Jul 15 10:29:18 2024 00:21:43.143 read: IOPS=6, BW=6841KiB/s (7005kB/s)(70.0MiB/10478msec) 00:21:43.144 slat (usec): min=968, max=2102.1k, avg=149061.64, stdev=513416.38 00:21:43.144 clat (msec): min=43, max=10473, avg=9135.88, stdev=2372.83 00:21:43.144 lat (msec): min=2145, max=10477, avg=9284.94, stdev=2106.10 00:21:43.144 clat percentiles (msec): 00:21:43.144 | 1.00th=[ 44], 5.00th=[ 4279], 10.00th=[ 4329], 20.00th=[ 6409], 00:21:43.144 | 30.00th=[10134], 40.00th=[10268], 50.00th=[10268], 60.00th=[10402], 00:21:43.144 | 70.00th=[10402], 80.00th=[10402], 90.00th=[10402], 95.00th=[10537], 00:21:43.144 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:21:43.144 | 99.99th=[10537] 00:21:43.144 lat (msec) : 50=1.43%, >=2000=98.57% 00:21:43.144 cpu : usr=0.00%, sys=0.84%, ctx=141, majf=0, minf=17921 00:21:43.144 IO depths : 1=1.4%, 2=2.9%, 4=5.7%, 8=11.4%, 16=22.9%, 32=45.7%, >=64=10.0% 00:21:43.144 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.144 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:21:43.144 issued 
rwts: total=70,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.144 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.144 job0: (groupid=0, jobs=1): err= 0: pid=2983687: Mon Jul 15 10:29:18 2024 00:21:43.144 read: IOPS=3, BW=3186KiB/s (3262kB/s)(32.0MiB/10285msec) 00:21:43.144 slat (usec): min=1497, max=2127.0k, avg=320414.93, stdev=728705.97 00:21:43.144 clat (msec): min=31, max=10261, avg=8001.04, stdev=2832.33 00:21:43.144 lat (msec): min=2120, max=10284, avg=8321.45, stdev=2456.71 00:21:43.144 clat percentiles (msec): 00:21:43.144 | 1.00th=[ 32], 5.00th=[ 2123], 10.00th=[ 4245], 20.00th=[ 6409], 00:21:43.144 | 30.00th=[ 6409], 40.00th=[ 8557], 50.00th=[10000], 60.00th=[10134], 00:21:43.144 | 70.00th=[10134], 80.00th=[10268], 90.00th=[10268], 95.00th=[10268], 00:21:43.144 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268], 00:21:43.144 | 99.99th=[10268] 00:21:43.144 lat (msec) : 50=3.12%, >=2000=96.88% 00:21:43.144 cpu : usr=0.00%, sys=0.20%, ctx=116, majf=0, minf=8193 00:21:43.144 IO depths : 1=3.1%, 2=6.2%, 4=12.5%, 8=25.0%, 16=50.0%, 32=3.1%, >=64=0.0% 00:21:43.144 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.144 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:21:43.144 issued rwts: total=32,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.144 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.144 job0: (groupid=0, jobs=1): err= 0: pid=2983688: Mon Jul 15 10:29:18 2024 00:21:43.144 read: IOPS=32, BW=32.1MiB/s (33.7MB/s)(330MiB/10281msec) 00:21:43.144 slat (usec): min=28, max=2090.9k, avg=31045.15, stdev=199357.32 00:21:43.144 clat (msec): min=33, max=5031, avg=2474.00, stdev=1628.23 00:21:43.144 lat (msec): min=708, max=5033, avg=2505.04, stdev=1628.01 00:21:43.144 clat percentiles (msec): 00:21:43.144 | 1.00th=[ 718], 5.00th=[ 785], 10.00th=[ 852], 20.00th=[ 944], 00:21:43.144 | 30.00th=[ 1045], 40.00th=[ 1167], 50.00th=[ 1250], 60.00th=[ 3708], 00:21:43.144 | 70.00th=[ 4044], 80.00th=[ 4329], 90.00th=[ 4665], 95.00th=[ 4933], 00:21:43.144 | 99.00th=[ 5000], 99.50th=[ 5000], 99.90th=[ 5000], 99.95th=[ 5000], 00:21:43.144 | 99.99th=[ 5000] 00:21:43.144 bw ( KiB/s): min= 4096, max=165888, per=1.80%, avg=82692.40, stdev=64466.61, samples=5 00:21:43.144 iops : min= 4, max= 162, avg=80.60, stdev=62.86, samples=5 00:21:43.144 lat (msec) : 50=0.30%, 750=0.91%, 1000=23.94%, 2000=29.09%, >=2000=45.76% 00:21:43.144 cpu : usr=0.03%, sys=0.73%, ctx=465, majf=0, minf=32769 00:21:43.144 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.4%, 16=4.8%, 32=9.7%, >=64=80.9% 00:21:43.144 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.144 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:21:43.144 issued rwts: total=330,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.144 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.144 job0: (groupid=0, jobs=1): err= 0: pid=2983689: Mon Jul 15 10:29:18 2024 00:21:43.144 read: IOPS=14, BW=14.1MiB/s (14.7MB/s)(146MiB/10382msec) 00:21:43.144 slat (usec): min=67, max=2163.6k, avg=70835.61, stdev=326997.88 00:21:43.144 clat (msec): min=38, max=10307, avg=7555.02, stdev=2013.75 00:21:43.144 lat (msec): min=2117, max=10313, avg=7625.86, stdev=1926.14 00:21:43.144 clat percentiles (msec): 00:21:43.144 | 1.00th=[ 2123], 5.00th=[ 2937], 10.00th=[ 3910], 20.00th=[ 7416], 00:21:43.144 | 30.00th=[ 7550], 40.00th=[ 7684], 50.00th=[ 7819], 60.00th=[ 8020], 00:21:43.144 | 70.00th=[ 8221], 80.00th=[ 8490], 
90.00th=[10134], 95.00th=[10268], 00:21:43.144 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268], 00:21:43.144 | 99.99th=[10268] 00:21:43.144 bw ( KiB/s): min= 2043, max=24576, per=0.16%, avg=7370.80, stdev=9780.43, samples=5 00:21:43.144 iops : min= 1, max= 24, avg= 6.80, stdev= 9.83, samples=5 00:21:43.144 lat (msec) : 50=0.68%, >=2000=99.32% 00:21:43.144 cpu : usr=0.01%, sys=0.79%, ctx=350, majf=0, minf=32769 00:21:43.144 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=5.5%, 16=11.0%, 32=21.9%, >=64=56.8% 00:21:43.144 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.144 complete : 0=0.0%, 4=95.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=5.0% 00:21:43.144 issued rwts: total=146,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.144 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.144 job1: (groupid=0, jobs=1): err= 0: pid=2983690: Mon Jul 15 10:29:18 2024 00:21:43.144 read: IOPS=48, BW=48.3MiB/s (50.7MB/s)(485MiB/10037msec) 00:21:43.144 slat (usec): min=25, max=122964, avg=20617.23, stdev=25159.89 00:21:43.144 clat (msec): min=35, max=4169, avg=2250.52, stdev=1324.01 00:21:43.144 lat (msec): min=37, max=4214, avg=2271.14, stdev=1331.34 00:21:43.144 clat percentiles (msec): 00:21:43.144 | 1.00th=[ 57], 5.00th=[ 114], 10.00th=[ 300], 20.00th=[ 735], 00:21:43.144 | 30.00th=[ 1368], 40.00th=[ 2039], 50.00th=[ 2366], 60.00th=[ 2635], 00:21:43.144 | 70.00th=[ 3171], 80.00th=[ 3708], 90.00th=[ 4077], 95.00th=[ 4111], 00:21:43.144 | 99.00th=[ 4144], 99.50th=[ 4144], 99.90th=[ 4178], 99.95th=[ 4178], 00:21:43.144 | 99.99th=[ 4178] 00:21:43.144 bw ( KiB/s): min=14336, max=108544, per=0.99%, avg=45205.62, stdev=26413.08, samples=13 00:21:43.144 iops : min= 14, max= 106, avg=44.00, stdev=25.76, samples=13 00:21:43.144 lat (msec) : 50=0.62%, 100=3.09%, 250=5.77%, 500=4.95%, 750=5.77% 00:21:43.144 lat (msec) : 1000=6.39%, 2000=12.58%, >=2000=60.82% 00:21:43.144 cpu : usr=0.01%, sys=1.01%, ctx=1782, majf=0, minf=32769 00:21:43.144 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.3%, 32=6.6%, >=64=87.0% 00:21:43.144 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.144 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:21:43.144 issued rwts: total=485,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.144 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.144 job1: (groupid=0, jobs=1): err= 0: pid=2983691: Mon Jul 15 10:29:18 2024 00:21:43.144 read: IOPS=53, BW=53.7MiB/s (56.3MB/s)(562MiB/10470msec) 00:21:43.144 slat (usec): min=25, max=1576.4k, avg=18550.67, stdev=69823.96 00:21:43.144 clat (msec): min=42, max=4292, avg=2249.41, stdev=770.87 00:21:43.144 lat (msec): min=1344, max=4315, avg=2267.96, stdev=768.35 00:21:43.144 clat percentiles (msec): 00:21:43.144 | 1.00th=[ 1351], 5.00th=[ 1368], 10.00th=[ 1418], 20.00th=[ 1569], 00:21:43.144 | 30.00th=[ 1636], 40.00th=[ 1770], 50.00th=[ 1955], 60.00th=[ 2232], 00:21:43.144 | 70.00th=[ 2735], 80.00th=[ 3138], 90.00th=[ 3440], 95.00th=[ 3675], 00:21:43.144 | 99.00th=[ 3775], 99.50th=[ 3775], 99.90th=[ 4279], 99.95th=[ 4279], 00:21:43.144 | 99.99th=[ 4279] 00:21:43.144 bw ( KiB/s): min=14336, max=132854, per=1.29%, avg=59230.60, stdev=31212.87, samples=15 00:21:43.144 iops : min= 14, max= 129, avg=57.73, stdev=30.37, samples=15 00:21:43.144 lat (msec) : 50=0.18%, 2000=51.60%, >=2000=48.22% 00:21:43.144 cpu : usr=0.01%, sys=1.37%, ctx=1852, majf=0, minf=32769 00:21:43.144 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.4%, 16=2.8%, 32=5.7%, 
>=64=88.8% 00:21:43.144 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.144 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:21:43.144 issued rwts: total=562,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.144 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.144 job1: (groupid=0, jobs=1): err= 0: pid=2983692: Mon Jul 15 10:29:18 2024 00:21:43.144 read: IOPS=42, BW=42.8MiB/s (44.9MB/s)(449MiB/10493msec) 00:21:43.144 slat (usec): min=35, max=1575.7k, avg=23271.19, stdev=77248.20 00:21:43.144 clat (msec): min=41, max=4875, avg=2854.10, stdev=841.72 00:21:43.144 lat (msec): min=1254, max=4920, avg=2877.37, stdev=840.58 00:21:43.144 clat percentiles (msec): 00:21:43.144 | 1.00th=[ 1284], 5.00th=[ 1469], 10.00th=[ 1552], 20.00th=[ 1888], 00:21:43.144 | 30.00th=[ 2467], 40.00th=[ 2702], 50.00th=[ 3138], 60.00th=[ 3272], 00:21:43.144 | 70.00th=[ 3339], 80.00th=[ 3440], 90.00th=[ 3641], 95.00th=[ 4144], 00:21:43.144 | 99.00th=[ 4799], 99.50th=[ 4866], 99.90th=[ 4866], 99.95th=[ 4866], 00:21:43.144 | 99.99th=[ 4866] 00:21:43.144 bw ( KiB/s): min=14336, max=90112, per=0.90%, avg=41082.06, stdev=16926.83, samples=16 00:21:43.144 iops : min= 14, max= 88, avg=40.06, stdev=16.51, samples=16 00:21:43.144 lat (msec) : 50=0.22%, 2000=21.16%, >=2000=78.62% 00:21:43.144 cpu : usr=0.03%, sys=1.43%, ctx=1593, majf=0, minf=32769 00:21:43.144 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.8%, 16=3.6%, 32=7.1%, >=64=86.0% 00:21:43.144 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.144 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:21:43.144 issued rwts: total=449,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.144 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.144 job1: (groupid=0, jobs=1): err= 0: pid=2983693: Mon Jul 15 10:29:18 2024 00:21:43.144 read: IOPS=41, BW=41.7MiB/s (43.7MB/s)(426MiB/10222msec) 00:21:43.144 slat (usec): min=23, max=2127.1k, avg=23881.85, stdev=176337.70 00:21:43.144 clat (msec): min=45, max=8793, avg=2923.37, stdev=2913.65 00:21:43.144 lat (msec): min=531, max=8796, avg=2947.25, stdev=2922.52 00:21:43.144 clat percentiles (msec): 00:21:43.144 | 1.00th=[ 531], 5.00th=[ 531], 10.00th=[ 535], 20.00th=[ 542], 00:21:43.144 | 30.00th=[ 542], 40.00th=[ 550], 50.00th=[ 550], 60.00th=[ 3138], 00:21:43.144 | 70.00th=[ 4329], 80.00th=[ 5671], 90.00th=[ 8658], 95.00th=[ 8658], 00:21:43.144 | 99.00th=[ 8792], 99.50th=[ 8792], 99.90th=[ 8792], 99.95th=[ 8792], 00:21:43.144 | 99.99th=[ 8792] 00:21:43.144 bw ( KiB/s): min=10240, max=239616, per=1.48%, avg=67811.56, stdev=77611.12, samples=9 00:21:43.144 iops : min= 10, max= 234, avg=66.22, stdev=75.79, samples=9 00:21:43.144 lat (msec) : 50=0.23%, 750=52.11%, >=2000=47.65% 00:21:43.144 cpu : usr=0.03%, sys=0.87%, ctx=584, majf=0, minf=32769 00:21:43.144 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.9%, 16=3.8%, 32=7.5%, >=64=85.2% 00:21:43.144 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.144 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:21:43.144 issued rwts: total=426,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.144 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.144 job1: (groupid=0, jobs=1): err= 0: pid=2983694: Mon Jul 15 10:29:18 2024 00:21:43.144 read: IOPS=71, BW=71.3MiB/s (74.8MB/s)(723MiB/10137msec) 00:21:43.144 slat (usec): min=21, max=345252, avg=13877.97, stdev=23614.16 00:21:43.144 clat (msec): min=98, max=3804, 
avg=1648.28, stdev=947.22 00:21:43.144 lat (msec): min=143, max=3818, avg=1662.15, stdev=951.54 00:21:43.144 clat percentiles (msec): 00:21:43.145 | 1.00th=[ 194], 5.00th=[ 542], 10.00th=[ 735], 20.00th=[ 785], 00:21:43.145 | 30.00th=[ 995], 40.00th=[ 1301], 50.00th=[ 1469], 60.00th=[ 1603], 00:21:43.145 | 70.00th=[ 1821], 80.00th=[ 2366], 90.00th=[ 3373], 95.00th=[ 3608], 00:21:43.145 | 99.00th=[ 3775], 99.50th=[ 3809], 99.90th=[ 3809], 99.95th=[ 3809], 00:21:43.145 | 99.99th=[ 3809] 00:21:43.145 bw ( KiB/s): min=12288, max=172032, per=1.66%, avg=75970.94, stdev=50667.59, samples=16 00:21:43.145 iops : min= 12, max= 168, avg=74.06, stdev=49.53, samples=16 00:21:43.145 lat (msec) : 100=0.14%, 250=1.66%, 500=2.77%, 750=8.44%, 1000=17.15% 00:21:43.145 lat (msec) : 2000=45.37%, >=2000=24.48% 00:21:43.145 cpu : usr=0.05%, sys=1.66%, ctx=1926, majf=0, minf=32769 00:21:43.145 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.2%, 32=4.4%, >=64=91.3% 00:21:43.145 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.145 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:21:43.145 issued rwts: total=723,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.145 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.145 job1: (groupid=0, jobs=1): err= 0: pid=2983695: Mon Jul 15 10:29:18 2024 00:21:43.145 read: IOPS=73, BW=73.7MiB/s (77.2MB/s)(746MiB/10126msec) 00:21:43.145 slat (usec): min=27, max=2113.7k, avg=13429.99, stdev=78893.86 00:21:43.145 clat (msec): min=101, max=5993, avg=1635.45, stdev=1023.12 00:21:43.145 lat (msec): min=156, max=6003, avg=1648.88, stdev=1031.47 00:21:43.145 clat percentiles (msec): 00:21:43.145 | 1.00th=[ 194], 5.00th=[ 567], 10.00th=[ 609], 20.00th=[ 642], 00:21:43.145 | 30.00th=[ 735], 40.00th=[ 1536], 50.00th=[ 1720], 60.00th=[ 1737], 00:21:43.145 | 70.00th=[ 1787], 80.00th=[ 1854], 90.00th=[ 3406], 95.00th=[ 3675], 00:21:43.145 | 99.00th=[ 3910], 99.50th=[ 5940], 99.90th=[ 6007], 99.95th=[ 6007], 00:21:43.145 | 99.99th=[ 6007] 00:21:43.145 bw ( KiB/s): min=10240, max=217088, per=1.72%, avg=78900.88, stdev=43093.73, samples=16 00:21:43.145 iops : min= 10, max= 212, avg=77.00, stdev=42.08, samples=16 00:21:43.145 lat (msec) : 250=1.34%, 500=3.35%, 750=25.34%, 1000=5.09%, 2000=47.32% 00:21:43.145 lat (msec) : >=2000=17.56% 00:21:43.145 cpu : usr=0.02%, sys=1.43%, ctx=1193, majf=0, minf=32769 00:21:43.145 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.1%, 32=4.3%, >=64=91.6% 00:21:43.145 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.145 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:21:43.145 issued rwts: total=746,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.145 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.145 job1: (groupid=0, jobs=1): err= 0: pid=2983696: Mon Jul 15 10:29:18 2024 00:21:43.145 read: IOPS=104, BW=105MiB/s (110MB/s)(1053MiB/10072msec) 00:21:43.145 slat (usec): min=32, max=107530, avg=9491.94, stdev=16655.26 00:21:43.145 clat (msec): min=68, max=2089, avg=1169.81, stdev=451.96 00:21:43.145 lat (msec): min=122, max=2093, avg=1179.30, stdev=454.30 00:21:43.145 clat percentiles (msec): 00:21:43.145 | 1.00th=[ 157], 5.00th=[ 676], 10.00th=[ 684], 20.00th=[ 743], 00:21:43.145 | 30.00th=[ 944], 40.00th=[ 961], 50.00th=[ 995], 60.00th=[ 1150], 00:21:43.145 | 70.00th=[ 1569], 80.00th=[ 1620], 90.00th=[ 1804], 95.00th=[ 1955], 00:21:43.145 | 99.00th=[ 2056], 99.50th=[ 2072], 99.90th=[ 2089], 99.95th=[ 2089], 
00:21:43.145 | 99.99th=[ 2089] 00:21:43.145 bw ( KiB/s): min=59392, max=182272, per=2.17%, avg=99636.47, stdev=34767.18, samples=19 00:21:43.145 iops : min= 58, max= 178, avg=97.26, stdev=33.90, samples=19 00:21:43.145 lat (msec) : 100=0.09%, 250=1.80%, 500=1.33%, 750=18.71%, 1000=28.68% 00:21:43.145 lat (msec) : 2000=46.53%, >=2000=2.85% 00:21:43.145 cpu : usr=0.11%, sys=1.70%, ctx=1358, majf=0, minf=32769 00:21:43.145 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=3.0%, >=64=94.0% 00:21:43.145 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.145 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:43.145 issued rwts: total=1053,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.145 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.145 job1: (groupid=0, jobs=1): err= 0: pid=2983697: Mon Jul 15 10:29:18 2024 00:21:43.145 read: IOPS=42, BW=42.0MiB/s (44.0MB/s)(435MiB/10355msec) 00:21:43.145 slat (usec): min=28, max=1978.3k, avg=23695.41, stdev=97083.60 00:21:43.145 clat (msec): min=45, max=4846, avg=2670.22, stdev=707.28 00:21:43.145 lat (msec): min=1813, max=4907, avg=2693.92, stdev=698.16 00:21:43.145 clat percentiles (msec): 00:21:43.145 | 1.00th=[ 1821], 5.00th=[ 1989], 10.00th=[ 2056], 20.00th=[ 2165], 00:21:43.145 | 30.00th=[ 2232], 40.00th=[ 2299], 50.00th=[ 2433], 60.00th=[ 2567], 00:21:43.145 | 70.00th=[ 2769], 80.00th=[ 3171], 90.00th=[ 3742], 95.00th=[ 4396], 00:21:43.145 | 99.00th=[ 4732], 99.50th=[ 4732], 99.90th=[ 4866], 99.95th=[ 4866], 00:21:43.145 | 99.99th=[ 4866] 00:21:43.145 bw ( KiB/s): min=14336, max=98304, per=1.14%, avg=52394.67, stdev=27190.25, samples=12 00:21:43.145 iops : min= 14, max= 96, avg=51.17, stdev=26.55, samples=12 00:21:43.145 lat (msec) : 50=0.23%, 2000=5.98%, >=2000=93.79% 00:21:43.145 cpu : usr=0.00%, sys=0.90%, ctx=1498, majf=0, minf=32769 00:21:43.145 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.8%, 16=3.7%, 32=7.4%, >=64=85.5% 00:21:43.145 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.145 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:21:43.145 issued rwts: total=435,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.145 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.145 job1: (groupid=0, jobs=1): err= 0: pid=2983698: Mon Jul 15 10:29:18 2024 00:21:43.145 read: IOPS=54, BW=54.1MiB/s (56.8MB/s)(545MiB/10068msec) 00:21:43.145 slat (usec): min=25, max=138859, avg=18356.61, stdev=23229.56 00:21:43.145 clat (msec): min=61, max=3148, avg=2180.73, stdev=826.53 00:21:43.145 lat (msec): min=70, max=3222, avg=2199.09, stdev=829.90 00:21:43.145 clat percentiles (msec): 00:21:43.145 | 1.00th=[ 86], 5.00th=[ 330], 10.00th=[ 709], 20.00th=[ 1485], 00:21:43.145 | 30.00th=[ 1989], 40.00th=[ 2366], 50.00th=[ 2500], 60.00th=[ 2635], 00:21:43.145 | 70.00th=[ 2735], 80.00th=[ 2836], 90.00th=[ 2937], 95.00th=[ 3004], 00:21:43.145 | 99.00th=[ 3071], 99.50th=[ 3138], 99.90th=[ 3138], 99.95th=[ 3138], 00:21:43.145 | 99.99th=[ 3138] 00:21:43.145 bw ( KiB/s): min=18432, max=91976, per=1.10%, avg=50336.71, stdev=21162.51, samples=17 00:21:43.145 iops : min= 18, max= 89, avg=49.06, stdev=20.50, samples=17 00:21:43.145 lat (msec) : 100=1.28%, 250=2.02%, 500=3.49%, 750=3.67%, 1000=4.22% 00:21:43.145 lat (msec) : 2000=15.78%, >=2000=69.54% 00:21:43.145 cpu : usr=0.02%, sys=1.24%, ctx=1801, majf=0, minf=32769 00:21:43.145 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=2.9%, 32=5.9%, >=64=88.4% 00:21:43.145 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.145 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:21:43.145 issued rwts: total=545,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.145 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.145 job1: (groupid=0, jobs=1): err= 0: pid=2983699: Mon Jul 15 10:29:18 2024 00:21:43.145 read: IOPS=16, BW=16.4MiB/s (17.1MB/s)(170MiB/10394msec) 00:21:43.145 slat (usec): min=301, max=2124.9k, avg=60905.37, stdev=315757.17 00:21:43.145 clat (msec): min=39, max=9657, avg=7099.12, stdev=3077.35 00:21:43.145 lat (msec): min=1327, max=9678, avg=7160.03, stdev=3028.73 00:21:43.145 clat percentiles (msec): 00:21:43.145 | 1.00th=[ 1334], 5.00th=[ 1368], 10.00th=[ 1552], 20.00th=[ 3406], 00:21:43.145 | 30.00th=[ 6409], 40.00th=[ 8658], 50.00th=[ 8792], 60.00th=[ 9060], 00:21:43.145 | 70.00th=[ 9194], 80.00th=[ 9329], 90.00th=[ 9463], 95.00th=[ 9597], 00:21:43.145 | 99.00th=[ 9597], 99.50th=[ 9597], 99.90th=[ 9597], 99.95th=[ 9597], 00:21:43.145 | 99.99th=[ 9597] 00:21:43.145 bw ( KiB/s): min= 4096, max=51200, per=0.38%, avg=17200.60, stdev=19355.19, samples=5 00:21:43.145 iops : min= 4, max= 50, avg=16.60, stdev=19.05, samples=5 00:21:43.145 lat (msec) : 50=0.59%, 2000=14.71%, >=2000=84.71% 00:21:43.145 cpu : usr=0.00%, sys=0.73%, ctx=455, majf=0, minf=32769 00:21:43.145 IO depths : 1=0.6%, 2=1.2%, 4=2.4%, 8=4.7%, 16=9.4%, 32=18.8%, >=64=62.9% 00:21:43.145 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.145 complete : 0=0.0%, 4=97.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.3% 00:21:43.145 issued rwts: total=170,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.145 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.145 job1: (groupid=0, jobs=1): err= 0: pid=2983700: Mon Jul 15 10:29:18 2024 00:21:43.145 read: IOPS=58, BW=58.9MiB/s (61.7MB/s)(593MiB/10070msec) 00:21:43.145 slat (usec): min=25, max=184595, avg=16875.04, stdev=22094.80 00:21:43.145 clat (msec): min=60, max=3098, avg=1977.75, stdev=639.49 00:21:43.145 lat (msec): min=83, max=3102, avg=1994.63, stdev=639.62 00:21:43.145 clat percentiles (msec): 00:21:43.145 | 1.00th=[ 197], 5.00th=[ 550], 10.00th=[ 978], 20.00th=[ 1636], 00:21:43.145 | 30.00th=[ 1838], 40.00th=[ 1955], 50.00th=[ 2089], 60.00th=[ 2165], 00:21:43.145 | 70.00th=[ 2265], 80.00th=[ 2500], 90.00th=[ 2668], 95.00th=[ 2836], 00:21:43.145 | 99.00th=[ 3071], 99.50th=[ 3104], 99.90th=[ 3104], 99.95th=[ 3104], 00:21:43.145 | 99.99th=[ 3104] 00:21:43.145 bw ( KiB/s): min=20480, max=118784, per=1.30%, avg=59658.00, stdev=29180.94, samples=16 00:21:43.145 iops : min= 20, max= 116, avg=58.25, stdev=28.49, samples=16 00:21:43.145 lat (msec) : 100=0.34%, 250=1.35%, 500=2.53%, 750=4.05%, 1000=2.36% 00:21:43.145 lat (msec) : 2000=31.70%, >=2000=57.67% 00:21:43.145 cpu : usr=0.01%, sys=1.33%, ctx=1989, majf=0, minf=32769 00:21:43.145 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=2.7%, 32=5.4%, >=64=89.4% 00:21:43.145 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.145 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:21:43.145 issued rwts: total=593,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.145 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.145 job1: (groupid=0, jobs=1): err= 0: pid=2983701: Mon Jul 15 10:29:18 2024 00:21:43.145 read: IOPS=26, BW=27.0MiB/s (28.3MB/s)(276MiB/10239msec) 00:21:43.145 slat (usec): min=27, max=2120.8k, avg=36235.60, stdev=215936.34 
00:21:43.145 clat (msec): min=236, max=8235, avg=4340.87, stdev=3256.54 00:21:43.145 lat (msec): min=264, max=8249, avg=4377.11, stdev=3260.99 00:21:43.145 clat percentiles (msec): 00:21:43.145 | 1.00th=[ 266], 5.00th=[ 338], 10.00th=[ 472], 20.00th=[ 927], 00:21:43.145 | 30.00th=[ 1301], 40.00th=[ 1905], 50.00th=[ 3742], 60.00th=[ 6074], 00:21:43.145 | 70.00th=[ 8020], 80.00th=[ 8087], 90.00th=[ 8154], 95.00th=[ 8154], 00:21:43.145 | 99.00th=[ 8221], 99.50th=[ 8221], 99.90th=[ 8221], 99.95th=[ 8221], 00:21:43.145 | 99.99th=[ 8221] 00:21:43.145 bw ( KiB/s): min= 2048, max=81996, per=0.81%, avg=37361.38, stdev=28524.46, samples=8 00:21:43.145 iops : min= 2, max= 80, avg=36.12, stdev=27.94, samples=8 00:21:43.145 lat (msec) : 250=0.36%, 500=10.51%, 750=6.16%, 1000=5.43%, 2000=20.29% 00:21:43.145 lat (msec) : >=2000=57.25% 00:21:43.145 cpu : usr=0.01%, sys=0.87%, ctx=766, majf=0, minf=32769 00:21:43.145 IO depths : 1=0.4%, 2=0.7%, 4=1.4%, 8=2.9%, 16=5.8%, 32=11.6%, >=64=77.2% 00:21:43.145 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.145 complete : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7% 00:21:43.145 issued rwts: total=276,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.145 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.145 job1: (groupid=0, jobs=1): err= 0: pid=2983702: Mon Jul 15 10:29:18 2024 00:21:43.146 read: IOPS=37, BW=37.3MiB/s (39.1MB/s)(391MiB/10479msec) 00:21:43.146 slat (usec): min=25, max=1929.4k, avg=26688.51, stdev=99660.13 00:21:43.146 clat (msec): min=42, max=5078, avg=3096.02, stdev=585.65 00:21:43.146 lat (msec): min=1971, max=5100, avg=3122.71, stdev=572.43 00:21:43.146 clat percentiles (msec): 00:21:43.146 | 1.00th=[ 1972], 5.00th=[ 2165], 10.00th=[ 2433], 20.00th=[ 2735], 00:21:43.146 | 30.00th=[ 2869], 40.00th=[ 3037], 50.00th=[ 3171], 60.00th=[ 3205], 00:21:43.146 | 70.00th=[ 3239], 80.00th=[ 3272], 90.00th=[ 3540], 95.00th=[ 4530], 00:21:43.146 | 99.00th=[ 5000], 99.50th=[ 5067], 99.90th=[ 5067], 99.95th=[ 5067], 00:21:43.146 | 99.99th=[ 5067] 00:21:43.146 bw ( KiB/s): min= 6144, max=75776, per=0.90%, avg=41432.62, stdev=21004.96, samples=13 00:21:43.146 iops : min= 6, max= 74, avg=40.46, stdev=20.51, samples=13 00:21:43.146 lat (msec) : 50=0.26%, 2000=1.79%, >=2000=97.95% 00:21:43.146 cpu : usr=0.02%, sys=0.99%, ctx=1437, majf=0, minf=32769 00:21:43.146 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.0%, 16=4.1%, 32=8.2%, >=64=83.9% 00:21:43.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.146 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:21:43.146 issued rwts: total=391,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.146 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.146 job2: (groupid=0, jobs=1): err= 0: pid=2983703: Mon Jul 15 10:29:18 2024 00:21:43.146 read: IOPS=80, BW=80.4MiB/s (84.3MB/s)(809MiB/10059msec) 00:21:43.146 slat (usec): min=24, max=94099, avg=12380.06, stdev=17094.74 00:21:43.146 clat (msec): min=39, max=2705, avg=1469.88, stdev=617.55 00:21:43.146 lat (msec): min=91, max=2756, avg=1482.26, stdev=618.00 00:21:43.146 clat percentiles (msec): 00:21:43.146 | 1.00th=[ 230], 5.00th=[ 510], 10.00th=[ 531], 20.00th=[ 877], 00:21:43.146 | 30.00th=[ 1234], 40.00th=[ 1284], 50.00th=[ 1418], 60.00th=[ 1636], 00:21:43.146 | 70.00th=[ 1838], 80.00th=[ 2056], 90.00th=[ 2366], 95.00th=[ 2534], 00:21:43.146 | 99.00th=[ 2635], 99.50th=[ 2702], 99.90th=[ 2702], 99.95th=[ 2702], 00:21:43.146 | 99.99th=[ 2702] 
00:21:43.146 bw ( KiB/s): min=20480, max=225280, per=1.79%, avg=82047.18, stdev=57711.71, samples=17 00:21:43.146 iops : min= 20, max= 220, avg=79.88, stdev=56.51, samples=17 00:21:43.146 lat (msec) : 50=0.12%, 100=0.25%, 250=0.87%, 500=2.22%, 750=13.60% 00:21:43.146 lat (msec) : 1000=4.33%, 2000=56.86%, >=2000=21.76% 00:21:43.146 cpu : usr=0.02%, sys=1.65%, ctx=2237, majf=0, minf=32769 00:21:43.146 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.0%, >=64=92.2% 00:21:43.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.146 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:43.146 issued rwts: total=809,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.146 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.146 job2: (groupid=0, jobs=1): err= 0: pid=2983704: Mon Jul 15 10:29:18 2024 00:21:43.146 read: IOPS=45, BW=46.0MiB/s (48.2MB/s)(464MiB/10089msec) 00:21:43.146 slat (usec): min=25, max=123620, avg=21562.99, stdev=29281.68 00:21:43.146 clat (msec): min=81, max=3833, avg=2457.88, stdev=940.48 00:21:43.146 lat (msec): min=106, max=3840, avg=2479.45, stdev=938.44 00:21:43.146 clat percentiles (msec): 00:21:43.146 | 1.00th=[ 165], 5.00th=[ 760], 10.00th=[ 1250], 20.00th=[ 1536], 00:21:43.146 | 30.00th=[ 1804], 40.00th=[ 2198], 50.00th=[ 2635], 60.00th=[ 3037], 00:21:43.146 | 70.00th=[ 3171], 80.00th=[ 3406], 90.00th=[ 3507], 95.00th=[ 3675], 00:21:43.146 | 99.00th=[ 3809], 99.50th=[ 3809], 99.90th=[ 3842], 99.95th=[ 3842], 00:21:43.146 | 99.99th=[ 3842] 00:21:43.146 bw ( KiB/s): min=16384, max=196608, per=1.00%, avg=45795.27, stdev=43689.28, samples=15 00:21:43.146 iops : min= 16, max= 192, avg=44.60, stdev=42.71, samples=15 00:21:43.146 lat (msec) : 100=0.22%, 250=1.08%, 500=2.37%, 750=1.29%, 1000=1.08% 00:21:43.146 lat (msec) : 2000=30.60%, >=2000=63.36% 00:21:43.146 cpu : usr=0.03%, sys=0.81%, ctx=1622, majf=0, minf=32769 00:21:43.146 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.7%, 16=3.4%, 32=6.9%, >=64=86.4% 00:21:43.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.146 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:21:43.146 issued rwts: total=464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.146 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.146 job2: (groupid=0, jobs=1): err= 0: pid=2983705: Mon Jul 15 10:29:18 2024 00:21:43.146 read: IOPS=103, BW=104MiB/s (109MB/s)(1042MiB/10042msec) 00:21:43.146 slat (usec): min=23, max=199741, avg=9604.55, stdev=19768.66 00:21:43.146 clat (msec): min=28, max=3951, avg=1156.13, stdev=874.89 00:21:43.146 lat (msec): min=41, max=3958, avg=1165.73, stdev=879.17 00:21:43.146 clat percentiles (msec): 00:21:43.146 | 1.00th=[ 86], 5.00th=[ 439], 10.00th=[ 542], 20.00th=[ 600], 00:21:43.146 | 30.00th=[ 634], 40.00th=[ 760], 50.00th=[ 835], 60.00th=[ 911], 00:21:43.146 | 70.00th=[ 995], 80.00th=[ 1536], 90.00th=[ 2735], 95.00th=[ 3306], 00:21:43.146 | 99.00th=[ 3842], 99.50th=[ 3910], 99.90th=[ 3943], 99.95th=[ 3943], 00:21:43.146 | 99.99th=[ 3943] 00:21:43.146 bw ( KiB/s): min=22528, max=299008, per=2.22%, avg=101791.56, stdev=78393.11, samples=18 00:21:43.146 iops : min= 22, max= 292, avg=99.28, stdev=76.54, samples=18 00:21:43.146 lat (msec) : 50=0.58%, 100=0.48%, 250=0.38%, 500=6.05%, 750=31.77% 00:21:43.146 lat (msec) : 1000=31.00%, 2000=14.01%, >=2000=15.74% 00:21:43.146 cpu : usr=0.05%, sys=1.62%, ctx=2280, majf=0, minf=32769 00:21:43.146 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 
16=1.5%, 32=3.1%, >=64=94.0% 00:21:43.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.146 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:43.146 issued rwts: total=1042,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.146 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.146 job2: (groupid=0, jobs=1): err= 0: pid=2983706: Mon Jul 15 10:29:18 2024 00:21:43.146 read: IOPS=33, BW=33.8MiB/s (35.5MB/s)(341MiB/10076msec) 00:21:43.146 slat (usec): min=33, max=275645, avg=29359.70, stdev=43861.78 00:21:43.146 clat (msec): min=62, max=6578, avg=3526.37, stdev=1969.87 00:21:43.146 lat (msec): min=83, max=6596, avg=3555.73, stdev=1975.07 00:21:43.146 clat percentiles (msec): 00:21:43.146 | 1.00th=[ 136], 5.00th=[ 518], 10.00th=[ 995], 20.00th=[ 1502], 00:21:43.146 | 30.00th=[ 1888], 40.00th=[ 2567], 50.00th=[ 3574], 60.00th=[ 4396], 00:21:43.146 | 70.00th=[ 4933], 80.00th=[ 5604], 90.00th=[ 6275], 95.00th=[ 6477], 00:21:43.146 | 99.00th=[ 6544], 99.50th=[ 6544], 99.90th=[ 6611], 99.95th=[ 6611], 00:21:43.146 | 99.99th=[ 6611] 00:21:43.146 bw ( KiB/s): min= 6144, max=40960, per=0.53%, avg=24250.89, stdev=10768.39, samples=18 00:21:43.146 iops : min= 6, max= 40, avg=23.67, stdev=10.50, samples=18 00:21:43.146 lat (msec) : 100=0.59%, 250=0.88%, 500=3.52%, 750=1.17%, 1000=4.11% 00:21:43.146 lat (msec) : 2000=21.70%, >=2000=68.04% 00:21:43.146 cpu : usr=0.02%, sys=1.28%, ctx=1589, majf=0, minf=32769 00:21:43.146 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.3%, 16=4.7%, 32=9.4%, >=64=81.5% 00:21:43.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.146 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:21:43.146 issued rwts: total=341,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.146 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.146 job2: (groupid=0, jobs=1): err= 0: pid=2983707: Mon Jul 15 10:29:18 2024 00:21:43.146 read: IOPS=41, BW=41.9MiB/s (44.0MB/s)(421MiB/10042msec) 00:21:43.146 slat (usec): min=65, max=219482, avg=23788.59, stdev=34966.84 00:21:43.146 clat (msec): min=25, max=4887, avg=2752.07, stdev=1244.75 00:21:43.146 lat (msec): min=78, max=4908, avg=2775.85, stdev=1246.96 00:21:43.146 clat percentiles (msec): 00:21:43.146 | 1.00th=[ 113], 5.00th=[ 388], 10.00th=[ 1003], 20.00th=[ 1653], 00:21:43.146 | 30.00th=[ 2056], 40.00th=[ 2500], 50.00th=[ 2769], 60.00th=[ 3104], 00:21:43.146 | 70.00th=[ 3507], 80.00th=[ 3943], 90.00th=[ 4463], 95.00th=[ 4732], 00:21:43.146 | 99.00th=[ 4799], 99.50th=[ 4866], 99.90th=[ 4866], 99.95th=[ 4866], 00:21:43.146 | 99.99th=[ 4866] 00:21:43.146 bw ( KiB/s): min=14307, max=69632, per=0.75%, avg=34169.81, stdev=15139.43, samples=16 00:21:43.146 iops : min= 13, max= 68, avg=33.25, stdev=14.87, samples=16 00:21:43.146 lat (msec) : 50=0.24%, 100=0.71%, 250=2.61%, 500=2.38%, 750=2.85% 00:21:43.146 lat (msec) : 1000=1.19%, 2000=16.63%, >=2000=73.40% 00:21:43.146 cpu : usr=0.03%, sys=0.75%, ctx=1681, majf=0, minf=32769 00:21:43.146 IO depths : 1=0.2%, 2=0.5%, 4=1.0%, 8=1.9%, 16=3.8%, 32=7.6%, >=64=85.0% 00:21:43.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.146 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:21:43.146 issued rwts: total=421,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.146 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.146 job2: (groupid=0, jobs=1): err= 0: pid=2983708: Mon Jul 15 10:29:18 2024 00:21:43.146 read: 
IOPS=39, BW=39.3MiB/s (41.2MB/s)(395MiB/10059msec) 00:21:43.146 slat (usec): min=27, max=227570, avg=25397.13, stdev=29018.77 00:21:43.146 clat (msec): min=25, max=4339, avg=2750.57, stdev=796.68 00:21:43.146 lat (msec): min=69, max=4346, avg=2775.97, stdev=791.57 00:21:43.146 clat percentiles (msec): 00:21:43.146 | 1.00th=[ 79], 5.00th=[ 1234], 10.00th=[ 1770], 20.00th=[ 2366], 00:21:43.146 | 30.00th=[ 2467], 40.00th=[ 2769], 50.00th=[ 2937], 60.00th=[ 3004], 00:21:43.146 | 70.00th=[ 3071], 80.00th=[ 3239], 90.00th=[ 3473], 95.00th=[ 3977], 00:21:43.146 | 99.00th=[ 4329], 99.50th=[ 4329], 99.90th=[ 4329], 99.95th=[ 4329], 00:21:43.146 | 99.99th=[ 4329] 00:21:43.146 bw ( KiB/s): min= 8192, max=71680, per=0.85%, avg=39042.50, stdev=17652.65, samples=14 00:21:43.146 iops : min= 8, max= 70, avg=38.00, stdev=17.29, samples=14 00:21:43.146 lat (msec) : 50=0.25%, 100=1.27%, 250=1.01%, 500=0.25%, 750=0.76% 00:21:43.146 lat (msec) : 1000=0.51%, 2000=8.35%, >=2000=87.59% 00:21:43.146 cpu : usr=0.00%, sys=0.79%, ctx=1924, majf=0, minf=32769 00:21:43.146 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.0%, 16=4.1%, 32=8.1%, >=64=84.1% 00:21:43.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.146 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:21:43.146 issued rwts: total=395,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.146 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.146 job2: (groupid=0, jobs=1): err= 0: pid=2983709: Mon Jul 15 10:29:18 2024 00:21:43.146 read: IOPS=46, BW=46.6MiB/s (48.9MB/s)(473MiB/10143msec) 00:21:43.146 slat (usec): min=26, max=164741, avg=21223.32, stdev=27077.27 00:21:43.146 clat (msec): min=101, max=3569, avg=2558.80, stdev=826.07 00:21:43.146 lat (msec): min=149, max=3621, avg=2580.03, stdev=825.70 00:21:43.146 clat percentiles (msec): 00:21:43.146 | 1.00th=[ 226], 5.00th=[ 676], 10.00th=[ 986], 20.00th=[ 2123], 00:21:43.146 | 30.00th=[ 2534], 40.00th=[ 2635], 50.00th=[ 2735], 60.00th=[ 2869], 00:21:43.146 | 70.00th=[ 3071], 80.00th=[ 3205], 90.00th=[ 3373], 95.00th=[ 3406], 00:21:43.146 | 99.00th=[ 3507], 99.50th=[ 3540], 99.90th=[ 3574], 99.95th=[ 3574], 00:21:43.146 | 99.99th=[ 3574] 00:21:43.146 bw ( KiB/s): min=20480, max=65536, per=0.90%, avg=41411.41, stdev=14960.57, samples=17 00:21:43.146 iops : min= 20, max= 64, avg=40.35, stdev=14.64, samples=17 00:21:43.146 lat (msec) : 250=1.48%, 500=1.48%, 750=3.59%, 1000=3.81%, 2000=8.88% 00:21:43.146 lat (msec) : >=2000=80.76% 00:21:43.147 cpu : usr=0.01%, sys=1.29%, ctx=1968, majf=0, minf=32769 00:21:43.147 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.7%, 16=3.4%, 32=6.8%, >=64=86.7% 00:21:43.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.147 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:21:43.147 issued rwts: total=473,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.147 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.147 job2: (groupid=0, jobs=1): err= 0: pid=2983710: Mon Jul 15 10:29:18 2024 00:21:43.147 read: IOPS=46, BW=46.6MiB/s (48.8MB/s)(472MiB/10139msec) 00:21:43.147 slat (usec): min=38, max=185007, avg=21256.31, stdev=30102.93 00:21:43.147 clat (msec): min=103, max=4050, avg=2452.25, stdev=956.65 00:21:43.147 lat (msec): min=288, max=4053, avg=2473.51, stdev=954.57 00:21:43.147 clat percentiles (msec): 00:21:43.147 | 1.00th=[ 426], 5.00th=[ 810], 10.00th=[ 1217], 20.00th=[ 1418], 00:21:43.147 | 30.00th=[ 1636], 40.00th=[ 2106], 50.00th=[ 2702], 60.00th=[ 
3171], 00:21:43.147 | 70.00th=[ 3306], 80.00th=[ 3339], 90.00th=[ 3440], 95.00th=[ 3540], 00:21:43.147 | 99.00th=[ 3943], 99.50th=[ 3977], 99.90th=[ 4044], 99.95th=[ 4044], 00:21:43.147 | 99.99th=[ 4044] 00:21:43.147 bw ( KiB/s): min= 2048, max=174080, per=0.96%, avg=43960.63, stdev=42902.83, samples=16 00:21:43.147 iops : min= 2, max= 170, avg=42.81, stdev=41.91, samples=16 00:21:43.147 lat (msec) : 250=0.21%, 500=1.91%, 750=2.12%, 1000=1.48%, 2000=33.05% 00:21:43.147 lat (msec) : >=2000=61.23% 00:21:43.147 cpu : usr=0.00%, sys=1.09%, ctx=1941, majf=0, minf=32769 00:21:43.147 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.7%, 16=3.4%, 32=6.8%, >=64=86.7% 00:21:43.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.147 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:21:43.147 issued rwts: total=472,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.147 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.147 job2: (groupid=0, jobs=1): err= 0: pid=2983711: Mon Jul 15 10:29:18 2024 00:21:43.147 read: IOPS=48, BW=48.4MiB/s (50.8MB/s)(486MiB/10040msec) 00:21:43.147 slat (usec): min=226, max=171006, avg=20574.48, stdev=26470.94 00:21:43.147 clat (msec): min=38, max=4308, avg=2296.13, stdev=1121.11 00:21:43.147 lat (msec): min=63, max=4327, avg=2316.71, stdev=1121.61 00:21:43.147 clat percentiles (msec): 00:21:43.147 | 1.00th=[ 74], 5.00th=[ 397], 10.00th=[ 894], 20.00th=[ 1167], 00:21:43.147 | 30.00th=[ 1536], 40.00th=[ 1938], 50.00th=[ 2500], 60.00th=[ 2668], 00:21:43.147 | 70.00th=[ 2903], 80.00th=[ 3339], 90.00th=[ 3910], 95.00th=[ 4111], 00:21:43.147 | 99.00th=[ 4279], 99.50th=[ 4329], 99.90th=[ 4329], 99.95th=[ 4329], 00:21:43.147 | 99.99th=[ 4329] 00:21:43.147 bw ( KiB/s): min= 6144, max=161792, per=0.98%, avg=44779.00, stdev=39234.89, samples=15 00:21:43.147 iops : min= 6, max= 158, avg=43.60, stdev=38.39, samples=15 00:21:43.147 lat (msec) : 50=0.21%, 100=1.03%, 250=2.88%, 500=1.85%, 750=2.67% 00:21:43.147 lat (msec) : 1000=4.94%, 2000=26.75%, >=2000=59.67% 00:21:43.147 cpu : usr=0.01%, sys=0.77%, ctx=2002, majf=0, minf=32769 00:21:43.147 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.3%, 32=6.6%, >=64=87.0% 00:21:43.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.147 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:21:43.147 issued rwts: total=486,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.147 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.147 job2: (groupid=0, jobs=1): err= 0: pid=2983712: Mon Jul 15 10:29:18 2024 00:21:43.147 read: IOPS=137, BW=138MiB/s (144MB/s)(1448MiB/10511msec) 00:21:43.147 slat (usec): min=30, max=2073.9k, avg=7223.28, stdev=76564.66 00:21:43.147 clat (msec): min=45, max=4508, avg=900.26, stdev=1097.38 00:21:43.147 lat (msec): min=212, max=4511, avg=907.48, stdev=1100.64 00:21:43.147 clat percentiles (msec): 00:21:43.147 | 1.00th=[ 213], 5.00th=[ 213], 10.00th=[ 215], 20.00th=[ 224], 00:21:43.147 | 30.00th=[ 317], 40.00th=[ 372], 50.00th=[ 439], 60.00th=[ 776], 00:21:43.147 | 70.00th=[ 911], 80.00th=[ 1083], 90.00th=[ 1234], 95.00th=[ 4396], 00:21:43.147 | 99.00th=[ 4463], 99.50th=[ 4530], 99.90th=[ 4530], 99.95th=[ 4530], 00:21:43.147 | 99.99th=[ 4530] 00:21:43.147 bw ( KiB/s): min=28672, max=575488, per=4.54%, avg=207968.08, stdev=156184.92, samples=13 00:21:43.147 iops : min= 28, max= 562, avg=203.08, stdev=152.54, samples=13 00:21:43.147 lat (msec) : 50=0.07%, 250=22.65%, 500=27.97%, 750=7.11%, 1000=17.40% 
00:21:43.147 lat (msec) : 2000=15.06%, >=2000=9.74% 00:21:43.147 cpu : usr=0.05%, sys=2.36%, ctx=2172, majf=0, minf=32769 00:21:43.147 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.2%, >=64=95.6% 00:21:43.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.147 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:43.147 issued rwts: total=1448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.147 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.147 job2: (groupid=0, jobs=1): err= 0: pid=2983713: Mon Jul 15 10:29:18 2024 00:21:43.147 read: IOPS=114, BW=115MiB/s (120MB/s)(1149MiB/10010msec) 00:21:43.147 slat (usec): min=26, max=230860, avg=8698.45, stdev=17815.44 00:21:43.147 clat (msec): min=9, max=3075, avg=1021.00, stdev=784.25 00:21:43.147 lat (msec): min=9, max=3075, avg=1029.70, stdev=789.04 00:21:43.147 clat percentiles (msec): 00:21:43.147 | 1.00th=[ 34], 5.00th=[ 443], 10.00th=[ 477], 20.00th=[ 502], 00:21:43.147 | 30.00th=[ 506], 40.00th=[ 523], 50.00th=[ 550], 60.00th=[ 776], 00:21:43.147 | 70.00th=[ 1200], 80.00th=[ 1653], 90.00th=[ 2567], 95.00th=[ 2802], 00:21:43.147 | 99.00th=[ 2903], 99.50th=[ 3004], 99.90th=[ 3071], 99.95th=[ 3071], 00:21:43.147 | 99.99th=[ 3071] 00:21:43.147 bw ( KiB/s): min= 8192, max=260617, per=2.55%, avg=116757.71, stdev=97732.54, samples=17 00:21:43.147 iops : min= 8, max= 254, avg=113.94, stdev=95.42, samples=17 00:21:43.147 lat (msec) : 10=0.17%, 20=0.26%, 50=1.22%, 100=1.31%, 250=0.61% 00:21:43.147 lat (msec) : 500=16.97%, 750=38.82%, 1000=5.05%, 2000=20.71%, >=2000=14.88% 00:21:43.147 cpu : usr=0.08%, sys=1.57%, ctx=1996, majf=0, minf=32769 00:21:43.147 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.4%, 32=2.8%, >=64=94.5% 00:21:43.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.147 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:43.147 issued rwts: total=1149,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.147 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.147 job2: (groupid=0, jobs=1): err= 0: pid=2983714: Mon Jul 15 10:29:18 2024 00:21:43.147 read: IOPS=37, BW=37.6MiB/s (39.5MB/s)(381MiB/10123msec) 00:21:43.147 slat (usec): min=49, max=254716, avg=26264.17, stdev=32713.02 00:21:43.147 clat (msec): min=114, max=4538, avg=2817.03, stdev=1224.81 00:21:43.147 lat (msec): min=150, max=4539, avg=2843.30, stdev=1225.65 00:21:43.147 clat percentiles (msec): 00:21:43.147 | 1.00th=[ 182], 5.00th=[ 279], 10.00th=[ 659], 20.00th=[ 2072], 00:21:43.147 | 30.00th=[ 2534], 40.00th=[ 2601], 50.00th=[ 2735], 60.00th=[ 3071], 00:21:43.147 | 70.00th=[ 3742], 80.00th=[ 4111], 90.00th=[ 4329], 95.00th=[ 4463], 00:21:43.147 | 99.00th=[ 4530], 99.50th=[ 4530], 99.90th=[ 4530], 99.95th=[ 4530], 00:21:43.147 | 99.99th=[ 4530] 00:21:43.147 bw ( KiB/s): min= 8192, max=71680, per=0.79%, avg=36238.57, stdev=19391.66, samples=14 00:21:43.147 iops : min= 8, max= 70, avg=35.14, stdev=18.95, samples=14 00:21:43.147 lat (msec) : 250=4.20%, 500=4.72%, 750=1.84%, 1000=1.31%, 2000=7.61% 00:21:43.147 lat (msec) : >=2000=80.31% 00:21:43.147 cpu : usr=0.06%, sys=0.83%, ctx=1900, majf=0, minf=32769 00:21:43.147 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.1%, 16=4.2%, 32=8.4%, >=64=83.5% 00:21:43.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.147 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:21:43.147 issued rwts: total=381,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:21:43.147 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.147 job2: (groupid=0, jobs=1): err= 0: pid=2983715: Mon Jul 15 10:29:18 2024 00:21:43.147 read: IOPS=161, BW=162MiB/s (170MB/s)(1640MiB/10141msec) 00:21:43.147 slat (usec): min=24, max=97662, avg=6107.33, stdev=11528.73 00:21:43.147 clat (msec): min=114, max=2809, avg=750.47, stdev=527.48 00:21:43.147 lat (msec): min=140, max=2825, avg=756.58, stdev=529.81 00:21:43.147 clat percentiles (msec): 00:21:43.147 | 1.00th=[ 342], 5.00th=[ 405], 10.00th=[ 422], 20.00th=[ 489], 00:21:43.147 | 30.00th=[ 506], 40.00th=[ 542], 50.00th=[ 558], 60.00th=[ 617], 00:21:43.147 | 70.00th=[ 693], 80.00th=[ 785], 90.00th=[ 1083], 95.00th=[ 2265], 00:21:43.147 | 99.00th=[ 2802], 99.50th=[ 2802], 99.90th=[ 2802], 99.95th=[ 2802], 00:21:43.147 | 99.99th=[ 2802] 00:21:43.147 bw ( KiB/s): min= 4096, max=329728, per=3.55%, avg=162827.74, stdev=98008.93, samples=19 00:21:43.147 iops : min= 4, max= 322, avg=158.95, stdev=95.77, samples=19 00:21:43.147 lat (msec) : 250=0.43%, 500=27.13%, 750=51.46%, 1000=8.29%, 2000=6.04% 00:21:43.147 lat (msec) : >=2000=6.65% 00:21:43.147 cpu : usr=0.06%, sys=2.37%, ctx=2033, majf=0, minf=32769 00:21:43.147 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=2.0%, >=64=96.2% 00:21:43.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.147 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:43.147 issued rwts: total=1640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.147 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.147 job3: (groupid=0, jobs=1): err= 0: pid=2983716: Mon Jul 15 10:29:18 2024 00:21:43.147 read: IOPS=7, BW=7816KiB/s (8004kB/s)(80.0MiB/10481msec) 00:21:43.148 slat (usec): min=688, max=2079.9k, avg=130401.44, stdev=475720.44 00:21:43.148 clat (msec): min=48, max=10474, avg=8041.16, stdev=2991.24 00:21:43.148 lat (msec): min=2092, max=10480, avg=8171.57, stdev=2863.04 00:21:43.148 clat percentiles (msec): 00:21:43.148 | 1.00th=[ 49], 5.00th=[ 2123], 10.00th=[ 2165], 20.00th=[ 4329], 00:21:43.148 | 30.00th=[ 6409], 40.00th=[ 8557], 50.00th=[10134], 60.00th=[10268], 00:21:43.148 | 70.00th=[10402], 80.00th=[10402], 90.00th=[10402], 95.00th=[10402], 00:21:43.148 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:21:43.148 | 99.99th=[10537] 00:21:43.148 lat (msec) : 50=1.25%, >=2000=98.75% 00:21:43.148 cpu : usr=0.01%, sys=0.79%, ctx=144, majf=0, minf=20481 00:21:43.148 IO depths : 1=1.2%, 2=2.5%, 4=5.0%, 8=10.0%, 16=20.0%, 32=40.0%, >=64=21.3% 00:21:43.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.148 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:21:43.148 issued rwts: total=80,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.148 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.148 job3: (groupid=0, jobs=1): err= 0: pid=2983717: Mon Jul 15 10:29:18 2024 00:21:43.148 read: IOPS=6, BW=6566KiB/s (6724kB/s)(67.0MiB/10449msec) 00:21:43.148 slat (usec): min=1485, max=2081.8k, avg=155214.92, stdev=518317.91 00:21:43.148 clat (msec): min=48, max=10438, avg=7754.19, stdev=3081.39 00:21:43.148 lat (msec): min=2092, max=10448, avg=7909.40, stdev=2946.31 00:21:43.148 clat percentiles (msec): 00:21:43.148 | 1.00th=[ 49], 5.00th=[ 2106], 10.00th=[ 2165], 20.00th=[ 4279], 00:21:43.148 | 30.00th=[ 6409], 40.00th=[ 6477], 50.00th=[ 8557], 60.00th=[10268], 00:21:43.148 | 70.00th=[10268], 80.00th=[10402], 
90.00th=[10402], 95.00th=[10402], 00:21:43.148 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402], 00:21:43.148 | 99.99th=[10402] 00:21:43.148 lat (msec) : 50=1.49%, >=2000=98.51% 00:21:43.148 cpu : usr=0.00%, sys=0.65%, ctx=138, majf=0, minf=17153 00:21:43.148 IO depths : 1=1.5%, 2=3.0%, 4=6.0%, 8=11.9%, 16=23.9%, 32=47.8%, >=64=6.0% 00:21:43.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.148 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:21:43.148 issued rwts: total=67,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.148 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.148 job3: (groupid=0, jobs=1): err= 0: pid=2983718: Mon Jul 15 10:29:18 2024 00:21:43.148 read: IOPS=16, BW=16.4MiB/s (17.2MB/s)(167MiB/10192msec) 00:21:43.148 slat (usec): min=91, max=2132.5k, avg=60828.64, stdev=304248.38 00:21:43.148 clat (msec): min=32, max=9561, avg=7091.19, stdev=2958.20 00:21:43.148 lat (msec): min=1693, max=9573, avg=7152.02, stdev=2907.19 00:21:43.148 clat percentiles (msec): 00:21:43.148 | 1.00th=[ 1687], 5.00th=[ 1720], 10.00th=[ 1737], 20.00th=[ 3239], 00:21:43.148 | 30.00th=[ 8020], 40.00th=[ 8288], 50.00th=[ 8658], 60.00th=[ 8792], 00:21:43.148 | 70.00th=[ 9060], 80.00th=[ 9194], 90.00th=[ 9329], 95.00th=[ 9463], 00:21:43.148 | 99.00th=[ 9597], 99.50th=[ 9597], 99.90th=[ 9597], 99.95th=[ 9597], 00:21:43.148 | 99.99th=[ 9597] 00:21:43.148 bw ( KiB/s): min= 2048, max=36864, per=0.35%, avg=15974.40, stdev=15146.90, samples=5 00:21:43.148 iops : min= 2, max= 36, avg=15.60, stdev=14.79, samples=5 00:21:43.148 lat (msec) : 50=0.60%, 2000=18.56%, >=2000=80.84% 00:21:43.148 cpu : usr=0.00%, sys=0.69%, ctx=489, majf=0, minf=32769 00:21:43.148 IO depths : 1=0.6%, 2=1.2%, 4=2.4%, 8=4.8%, 16=9.6%, 32=19.2%, >=64=62.3% 00:21:43.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.148 complete : 0=0.0%, 4=97.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.4% 00:21:43.148 issued rwts: total=167,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.148 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.148 job3: (groupid=0, jobs=1): err= 0: pid=2983719: Mon Jul 15 10:29:18 2024 00:21:43.148 read: IOPS=16, BW=16.8MiB/s (17.6MB/s)(172MiB/10225msec) 00:21:43.148 slat (usec): min=89, max=2156.2k, avg=59220.77, stdev=298392.33 00:21:43.148 clat (msec): min=37, max=9492, avg=6778.46, stdev=2920.16 00:21:43.148 lat (msec): min=1635, max=9497, avg=6837.68, stdev=2874.76 00:21:43.148 clat percentiles (msec): 00:21:43.148 | 1.00th=[ 1620], 5.00th=[ 1670], 10.00th=[ 1703], 20.00th=[ 2106], 00:21:43.148 | 30.00th=[ 6409], 40.00th=[ 8154], 50.00th=[ 8288], 60.00th=[ 8423], 00:21:43.148 | 70.00th=[ 8658], 80.00th=[ 8926], 90.00th=[ 9194], 95.00th=[ 9329], 00:21:43.148 | 99.00th=[ 9463], 99.50th=[ 9463], 99.90th=[ 9463], 99.95th=[ 9463], 00:21:43.148 | 99.99th=[ 9463] 00:21:43.148 bw ( KiB/s): min= 4096, max=43008, per=0.39%, avg=18012.40, stdev=16081.83, samples=5 00:21:43.148 iops : min= 4, max= 42, avg=17.40, stdev=15.61, samples=5 00:21:43.148 lat (msec) : 50=0.58%, 2000=19.19%, >=2000=80.23% 00:21:43.148 cpu : usr=0.01%, sys=0.76%, ctx=491, majf=0, minf=32769 00:21:43.148 IO depths : 1=0.6%, 2=1.2%, 4=2.3%, 8=4.7%, 16=9.3%, 32=18.6%, >=64=63.4% 00:21:43.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.148 complete : 0=0.0%, 4=97.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.2% 00:21:43.148 issued rwts: total=172,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:21:43.148 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.148 job3: (groupid=0, jobs=1): err= 0: pid=2983720: Mon Jul 15 10:29:18 2024 00:21:43.148 read: IOPS=9, BW=9906KiB/s (10.1MB/s)(100MiB/10337msec) 00:21:43.148 slat (usec): min=306, max=2093.8k, avg=102911.08, stdev=396337.32 00:21:43.148 clat (msec): min=45, max=10332, avg=8892.03, stdev=2175.06 00:21:43.148 lat (msec): min=2028, max=10336, avg=8994.94, stdev=1987.64 00:21:43.148 clat percentiles (msec): 00:21:43.148 | 1.00th=[ 46], 5.00th=[ 2165], 10.00th=[ 6342], 20.00th=[ 9194], 00:21:43.148 | 30.00th=[ 9329], 40.00th=[ 9463], 50.00th=[ 9597], 60.00th=[ 9731], 00:21:43.148 | 70.00th=[ 9866], 80.00th=[10134], 90.00th=[10268], 95.00th=[10268], 00:21:43.148 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268], 00:21:43.148 | 99.99th=[10268] 00:21:43.148 lat (msec) : 50=1.00%, >=2000=99.00% 00:21:43.148 cpu : usr=0.00%, sys=0.50%, ctx=394, majf=0, minf=25601 00:21:43.148 IO depths : 1=1.0%, 2=2.0%, 4=4.0%, 8=8.0%, 16=16.0%, 32=32.0%, >=64=37.0% 00:21:43.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.148 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:21:43.148 issued rwts: total=100,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.148 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.148 job3: (groupid=0, jobs=1): err= 0: pid=2983721: Mon Jul 15 10:29:18 2024 00:21:43.148 read: IOPS=133, BW=134MiB/s (140MB/s)(1347MiB/10064msec) 00:21:43.148 slat (usec): min=30, max=2131.6k, avg=7417.88, stdev=58525.06 00:21:43.148 clat (msec): min=63, max=2950, avg=921.92, stdev=714.21 00:21:43.148 lat (msec): min=65, max=2953, avg=929.34, stdev=716.82 00:21:43.148 clat percentiles (msec): 00:21:43.148 | 1.00th=[ 106], 5.00th=[ 330], 10.00th=[ 502], 20.00th=[ 510], 00:21:43.148 | 30.00th=[ 531], 40.00th=[ 584], 50.00th=[ 642], 60.00th=[ 693], 00:21:43.148 | 70.00th=[ 768], 80.00th=[ 1200], 90.00th=[ 2072], 95.00th=[ 2802], 00:21:43.148 | 99.00th=[ 2903], 99.50th=[ 2937], 99.90th=[ 2937], 99.95th=[ 2937], 00:21:43.148 | 99.99th=[ 2937] 00:21:43.148 bw ( KiB/s): min= 2048, max=255489, per=3.41%, avg=156128.06, stdev=87240.52, samples=16 00:21:43.148 iops : min= 2, max= 249, avg=152.44, stdev=85.16, samples=16 00:21:43.148 lat (msec) : 100=0.74%, 250=2.67%, 500=4.90%, 750=60.13%, 1000=9.28% 00:21:43.148 lat (msec) : 2000=11.43%, >=2000=10.84% 00:21:43.148 cpu : usr=0.10%, sys=2.49%, ctx=1612, majf=0, minf=32206 00:21:43.148 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.4%, >=64=95.3% 00:21:43.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.148 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:43.148 issued rwts: total=1347,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.148 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.148 job3: (groupid=0, jobs=1): err= 0: pid=2983722: Mon Jul 15 10:29:18 2024 00:21:43.148 read: IOPS=7, BW=7331KiB/s (7507kB/s)(75.0MiB/10476msec) 00:21:43.148 slat (usec): min=660, max=2112.1k, avg=139027.27, stdev=500140.26 00:21:43.148 clat (msec): min=47, max=10474, avg=9295.72, stdev=2374.11 00:21:43.148 lat (msec): min=2121, max=10475, avg=9434.75, stdev=2116.57 00:21:43.148 clat percentiles (msec): 00:21:43.148 | 1.00th=[ 48], 5.00th=[ 4245], 10.00th=[ 4329], 20.00th=[ 8557], 00:21:43.148 | 30.00th=[10268], 40.00th=[10268], 50.00th=[10402], 60.00th=[10402], 00:21:43.148 | 70.00th=[10402], 
80.00th=[10402], 90.00th=[10402], 95.00th=[10537], 00:21:43.148 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:21:43.148 | 99.99th=[10537] 00:21:43.148 lat (msec) : 50=1.33%, >=2000=98.67% 00:21:43.148 cpu : usr=0.00%, sys=0.87%, ctx=129, majf=0, minf=19201 00:21:43.148 IO depths : 1=1.3%, 2=2.7%, 4=5.3%, 8=10.7%, 16=21.3%, 32=42.7%, >=64=16.0% 00:21:43.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.148 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:21:43.148 issued rwts: total=75,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.148 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.148 job3: (groupid=0, jobs=1): err= 0: pid=2983723: Mon Jul 15 10:29:18 2024 00:21:43.148 read: IOPS=90, BW=90.8MiB/s (95.2MB/s)(912MiB/10049msec) 00:21:43.148 slat (usec): min=29, max=1623.1k, avg=10984.69, stdev=75975.10 00:21:43.148 clat (msec): min=25, max=2905, avg=1084.50, stdev=658.86 00:21:43.148 lat (msec): min=92, max=2912, avg=1095.48, stdev=663.96 00:21:43.148 clat percentiles (msec): 00:21:43.148 | 1.00th=[ 106], 5.00th=[ 317], 10.00th=[ 625], 20.00th=[ 810], 00:21:43.148 | 30.00th=[ 818], 40.00th=[ 818], 50.00th=[ 827], 60.00th=[ 902], 00:21:43.148 | 70.00th=[ 961], 80.00th=[ 1028], 90.00th=[ 2567], 95.00th=[ 2601], 00:21:43.148 | 99.00th=[ 2635], 99.50th=[ 2869], 99.90th=[ 2903], 99.95th=[ 2903], 00:21:43.148 | 99.99th=[ 2903] 00:21:43.148 bw ( KiB/s): min=20439, max=157381, per=2.63%, avg=120608.08, stdev=49394.06, samples=12 00:21:43.148 iops : min= 19, max= 153, avg=117.58, stdev=48.34, samples=12 00:21:43.148 lat (msec) : 50=0.11%, 100=0.44%, 250=2.85%, 500=3.51%, 750=5.04% 00:21:43.148 lat (msec) : 1000=66.78%, 2000=6.58%, >=2000=14.69% 00:21:43.148 cpu : usr=0.03%, sys=1.41%, ctx=907, majf=0, minf=32769 00:21:43.148 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.8%, 32=3.5%, >=64=93.1% 00:21:43.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.148 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:43.148 issued rwts: total=912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.148 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.148 job3: (groupid=0, jobs=1): err= 0: pid=2983724: Mon Jul 15 10:29:18 2024 00:21:43.148 read: IOPS=14, BW=14.9MiB/s (15.7MB/s)(156MiB/10452msec) 00:21:43.148 slat (usec): min=78, max=2154.4k, avg=66700.47, stdev=323399.25 00:21:43.148 clat (msec): min=45, max=10311, avg=7976.03, stdev=2813.20 00:21:43.148 lat (msec): min=2103, max=10317, avg=8042.73, stdev=2744.52 00:21:43.148 clat percentiles (msec): 00:21:43.148 | 1.00th=[ 2106], 5.00th=[ 2299], 10.00th=[ 2400], 20.00th=[ 6409], 00:21:43.148 | 30.00th=[ 8020], 40.00th=[ 8288], 50.00th=[ 8490], 60.00th=[ 9866], 00:21:43.148 | 70.00th=[10000], 80.00th=[10134], 90.00th=[10268], 95.00th=[10268], 00:21:43.149 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268], 00:21:43.149 | 99.99th=[10268] 00:21:43.149 bw ( KiB/s): min= 2043, max=32768, per=0.25%, avg=11465.20, stdev=12256.19, samples=5 00:21:43.149 iops : min= 1, max= 32, avg=10.80, stdev=12.28, samples=5 00:21:43.149 lat (msec) : 50=0.64%, >=2000=99.36% 00:21:43.149 cpu : usr=0.00%, sys=1.16%, ctx=256, majf=0, minf=32769 00:21:43.149 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=5.1%, 16=10.3%, 32=20.5%, >=64=59.6% 00:21:43.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.149 complete : 0=0.0%, 4=96.7%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=3.3% 00:21:43.149 issued rwts: total=156,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.149 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.149 job3: (groupid=0, jobs=1): err= 0: pid=2983725: Mon Jul 15 10:29:18 2024 00:21:43.149 read: IOPS=48, BW=48.2MiB/s (50.6MB/s)(494MiB/10246msec) 00:21:43.149 slat (usec): min=23, max=2074.3k, avg=20708.50, stdev=153365.43 00:21:43.149 clat (msec): min=13, max=7204, avg=2433.66, stdev=2334.65 00:21:43.149 lat (msec): min=534, max=7209, avg=2454.36, stdev=2339.02 00:21:43.149 clat percentiles (msec): 00:21:43.149 | 1.00th=[ 535], 5.00th=[ 542], 10.00th=[ 558], 20.00th=[ 785], 00:21:43.149 | 30.00th=[ 1011], 40.00th=[ 1116], 50.00th=[ 1150], 60.00th=[ 1217], 00:21:43.149 | 70.00th=[ 1989], 80.00th=[ 5403], 90.00th=[ 7013], 95.00th=[ 7148], 00:21:43.149 | 99.00th=[ 7215], 99.50th=[ 7215], 99.90th=[ 7215], 99.95th=[ 7215], 00:21:43.149 | 99.99th=[ 7215] 00:21:43.149 bw ( KiB/s): min= 4096, max=219136, per=1.82%, avg=83283.89, stdev=73266.03, samples=9 00:21:43.149 iops : min= 4, max= 214, avg=81.22, stdev=71.68, samples=9 00:21:43.149 lat (msec) : 20=0.20%, 750=15.59%, 1000=12.55%, 2000=42.51%, >=2000=29.15% 00:21:43.149 cpu : usr=0.04%, sys=0.80%, ctx=904, majf=0, minf=32769 00:21:43.149 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.2%, 32=6.5%, >=64=87.2% 00:21:43.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.149 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:21:43.149 issued rwts: total=494,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.149 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.149 job3: (groupid=0, jobs=1): err= 0: pid=2983726: Mon Jul 15 10:29:18 2024 00:21:43.149 read: IOPS=26, BW=26.5MiB/s (27.8MB/s)(278MiB/10473msec) 00:21:43.149 slat (usec): min=27, max=2101.3k, avg=37509.76, stdev=227323.89 00:21:43.149 clat (msec): min=43, max=8313, avg=4374.14, stdev=3072.55 00:21:43.149 lat (msec): min=908, max=8316, avg=4411.65, stdev=3063.84 00:21:43.149 clat percentiles (msec): 00:21:43.149 | 1.00th=[ 902], 5.00th=[ 953], 10.00th=[ 1020], 20.00th=[ 1318], 00:21:43.149 | 30.00th=[ 1636], 40.00th=[ 1955], 50.00th=[ 4178], 60.00th=[ 6208], 00:21:43.149 | 70.00th=[ 7886], 80.00th=[ 8020], 90.00th=[ 8221], 95.00th=[ 8288], 00:21:43.149 | 99.00th=[ 8288], 99.50th=[ 8288], 99.90th=[ 8288], 99.95th=[ 8288], 00:21:43.149 | 99.99th=[ 8288] 00:21:43.149 bw ( KiB/s): min= 2048, max=155648, per=1.12%, avg=51200.00, stdev=59540.12, samples=6 00:21:43.149 iops : min= 2, max= 152, avg=50.00, stdev=58.14, samples=6 00:21:43.149 lat (msec) : 50=0.36%, 1000=8.63%, 2000=34.17%, >=2000=56.83% 00:21:43.149 cpu : usr=0.00%, sys=0.97%, ctx=725, majf=0, minf=32769 00:21:43.149 IO depths : 1=0.4%, 2=0.7%, 4=1.4%, 8=2.9%, 16=5.8%, 32=11.5%, >=64=77.3% 00:21:43.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.149 complete : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7% 00:21:43.149 issued rwts: total=278,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.149 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.149 job3: (groupid=0, jobs=1): err= 0: pid=2983727: Mon Jul 15 10:29:18 2024 00:21:43.149 read: IOPS=49, BW=49.6MiB/s (52.0MB/s)(516MiB/10411msec) 00:21:43.149 slat (usec): min=27, max=2094.2k, avg=20082.63, stdev=148185.48 00:21:43.149 clat (msec): min=45, max=6715, avg=1534.62, stdev=1538.35 00:21:43.149 lat (msec): min=656, max=6720, avg=1554.70, stdev=1553.75 00:21:43.149 clat 
percentiles (msec): 00:21:43.149 | 1.00th=[ 659], 5.00th=[ 684], 10.00th=[ 701], 20.00th=[ 735], 00:21:43.149 | 30.00th=[ 760], 40.00th=[ 793], 50.00th=[ 911], 60.00th=[ 927], 00:21:43.149 | 70.00th=[ 1485], 80.00th=[ 1720], 90.00th=[ 4933], 95.00th=[ 6409], 00:21:43.149 | 99.00th=[ 6745], 99.50th=[ 6745], 99.90th=[ 6745], 99.95th=[ 6745], 00:21:43.149 | 99.99th=[ 6745] 00:21:43.149 bw ( KiB/s): min=22528, max=190464, per=2.89%, avg=132437.33, stdev=66206.70, samples=6 00:21:43.149 iops : min= 22, max= 186, avg=129.33, stdev=64.65, samples=6 00:21:43.149 lat (msec) : 50=0.19%, 750=27.91%, 1000=36.82%, 2000=19.96%, >=2000=15.12% 00:21:43.149 cpu : usr=0.01%, sys=0.90%, ctx=709, majf=0, minf=32769 00:21:43.149 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.1%, 32=6.2%, >=64=87.8% 00:21:43.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.149 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:21:43.149 issued rwts: total=516,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.149 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.149 job3: (groupid=0, jobs=1): err= 0: pid=2983728: Mon Jul 15 10:29:18 2024 00:21:43.149 read: IOPS=8, BW=8292KiB/s (8491kB/s)(85.0MiB/10497msec) 00:21:43.149 slat (usec): min=342, max=2161.3k, avg=122924.77, stdev=463012.30 00:21:43.149 clat (msec): min=47, max=10494, avg=9322.67, stdev=2402.70 00:21:43.149 lat (msec): min=2090, max=10496, avg=9445.59, stdev=2179.44 00:21:43.149 clat percentiles (msec): 00:21:43.149 | 1.00th=[ 48], 5.00th=[ 2140], 10.00th=[ 6409], 20.00th=[ 8557], 00:21:43.149 | 30.00th=[10000], 40.00th=[10134], 50.00th=[10268], 60.00th=[10402], 00:21:43.149 | 70.00th=[10402], 80.00th=[10402], 90.00th=[10537], 95.00th=[10537], 00:21:43.149 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:21:43.149 | 99.99th=[10537] 00:21:43.149 lat (msec) : 50=1.18%, >=2000=98.82% 00:21:43.149 cpu : usr=0.01%, sys=0.76%, ctx=208, majf=0, minf=21761 00:21:43.149 IO depths : 1=1.2%, 2=2.4%, 4=4.7%, 8=9.4%, 16=18.8%, 32=37.6%, >=64=25.9% 00:21:43.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.149 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:21:43.149 issued rwts: total=85,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.149 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.149 job4: (groupid=0, jobs=1): err= 0: pid=2983729: Mon Jul 15 10:29:18 2024 00:21:43.149 read: IOPS=80, BW=80.5MiB/s (84.4MB/s)(807MiB/10026msec) 00:21:43.149 slat (usec): min=24, max=2034.7k, avg=12389.57, stdev=72878.61 00:21:43.149 clat (msec): min=24, max=4358, avg=1400.55, stdev=1073.26 00:21:43.149 lat (msec): min=26, max=4382, avg=1412.94, stdev=1078.16 00:21:43.149 clat percentiles (msec): 00:21:43.149 | 1.00th=[ 66], 5.00th=[ 153], 10.00th=[ 468], 20.00th=[ 567], 00:21:43.149 | 30.00th=[ 701], 40.00th=[ 944], 50.00th=[ 1083], 60.00th=[ 1183], 00:21:43.149 | 70.00th=[ 1502], 80.00th=[ 2198], 90.00th=[ 3104], 95.00th=[ 4077], 00:21:43.149 | 99.00th=[ 4279], 99.50th=[ 4329], 99.90th=[ 4329], 99.95th=[ 4329], 00:21:43.149 | 99.99th=[ 4329] 00:21:43.149 bw ( KiB/s): min= 6144, max=264192, per=2.05%, avg=93850.00, stdev=78938.12, samples=13 00:21:43.149 iops : min= 6, max= 258, avg=91.46, stdev=77.10, samples=13 00:21:43.149 lat (msec) : 50=0.74%, 100=2.11%, 250=4.96%, 500=3.59%, 750=19.45% 00:21:43.149 lat (msec) : 1000=15.12%, 2000=30.61%, >=2000=23.42% 00:21:43.149 cpu : usr=0.01%, sys=0.94%, ctx=2945, majf=0, 
minf=32769 00:21:43.149 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.0%, >=64=92.2% 00:21:43.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.149 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:43.149 issued rwts: total=807,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.149 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.149 job4: (groupid=0, jobs=1): err= 0: pid=2983730: Mon Jul 15 10:29:18 2024 00:21:43.149 read: IOPS=57, BW=57.4MiB/s (60.2MB/s)(577MiB/10053msec) 00:21:43.149 slat (usec): min=27, max=2034.9k, avg=17344.21, stdev=86313.50 00:21:43.149 clat (msec): min=43, max=4120, avg=2049.38, stdev=984.52 00:21:43.149 lat (msec): min=52, max=4180, avg=2066.72, stdev=985.34 00:21:43.149 clat percentiles (msec): 00:21:43.149 | 1.00th=[ 228], 5.00th=[ 676], 10.00th=[ 1020], 20.00th=[ 1301], 00:21:43.149 | 30.00th=[ 1636], 40.00th=[ 1787], 50.00th=[ 1854], 60.00th=[ 1921], 00:21:43.149 | 70.00th=[ 2072], 80.00th=[ 3239], 90.00th=[ 3943], 95.00th=[ 4044], 00:21:43.149 | 99.00th=[ 4111], 99.50th=[ 4111], 99.90th=[ 4111], 99.95th=[ 4111], 00:21:43.149 | 99.99th=[ 4111] 00:21:43.149 bw ( KiB/s): min= 4096, max=180224, per=1.43%, avg=65749.71, stdev=41108.77, samples=14 00:21:43.149 iops : min= 4, max= 176, avg=64.14, stdev=40.18, samples=14 00:21:43.149 lat (msec) : 50=0.17%, 100=0.17%, 250=1.04%, 500=2.43%, 750=1.39% 00:21:43.149 lat (msec) : 1000=3.64%, 2000=54.07%, >=2000=37.09% 00:21:43.149 cpu : usr=0.00%, sys=1.16%, ctx=2191, majf=0, minf=32769 00:21:43.149 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.8%, 32=5.5%, >=64=89.1% 00:21:43.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.149 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:21:43.149 issued rwts: total=577,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.149 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.149 job4: (groupid=0, jobs=1): err= 0: pid=2983731: Mon Jul 15 10:29:18 2024 00:21:43.149 read: IOPS=86, BW=86.9MiB/s (91.1MB/s)(872MiB/10040msec) 00:21:43.149 slat (usec): min=27, max=2046.2k, avg=11479.57, stdev=70436.54 00:21:43.149 clat (msec): min=27, max=3368, avg=1355.43, stdev=834.44 00:21:43.149 lat (msec): min=50, max=3370, avg=1366.91, stdev=835.30 00:21:43.149 clat percentiles (msec): 00:21:43.149 | 1.00th=[ 144], 5.00th=[ 380], 10.00th=[ 439], 20.00th=[ 684], 00:21:43.149 | 30.00th=[ 844], 40.00th=[ 894], 50.00th=[ 1062], 60.00th=[ 1351], 00:21:43.149 | 70.00th=[ 1653], 80.00th=[ 2123], 90.00th=[ 2668], 95.00th=[ 3138], 00:21:43.149 | 99.00th=[ 3339], 99.50th=[ 3339], 99.90th=[ 3373], 99.95th=[ 3373], 00:21:43.149 | 99.99th=[ 3373] 00:21:43.149 bw ( KiB/s): min= 2048, max=311296, per=2.49%, avg=114186.77, stdev=87995.59, samples=13 00:21:43.149 iops : min= 2, max= 304, avg=111.46, stdev=85.89, samples=13 00:21:43.149 lat (msec) : 50=0.11%, 100=0.23%, 250=1.03%, 500=11.93%, 750=11.12% 00:21:43.149 lat (msec) : 1000=23.51%, 2000=29.36%, >=2000=22.71% 00:21:43.149 cpu : usr=0.01%, sys=1.34%, ctx=3560, majf=0, minf=32769 00:21:43.149 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.8%, 32=3.7%, >=64=92.8% 00:21:43.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.149 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:43.149 issued rwts: total=872,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.149 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.149 job4: 
(groupid=0, jobs=1): err= 0: pid=2983732: Mon Jul 15 10:29:18 2024 00:21:43.149 read: IOPS=127, BW=127MiB/s (134MB/s)(1286MiB/10089msec) 00:21:43.149 slat (usec): min=24, max=2105.3k, avg=7802.88, stdev=69499.14 00:21:43.149 clat (msec): min=50, max=4335, avg=834.82, stdev=877.01 00:21:43.149 lat (msec): min=100, max=4340, avg=842.63, stdev=882.08 00:21:43.149 clat percentiles (msec): 00:21:43.149 | 1.00th=[ 232], 5.00th=[ 388], 10.00th=[ 401], 20.00th=[ 405], 00:21:43.150 | 30.00th=[ 443], 40.00th=[ 523], 50.00th=[ 625], 60.00th=[ 684], 00:21:43.150 | 70.00th=[ 735], 80.00th=[ 802], 90.00th=[ 1070], 95.00th=[ 3977], 00:21:43.150 | 99.00th=[ 4329], 99.50th=[ 4329], 99.90th=[ 4329], 99.95th=[ 4329], 00:21:43.150 | 99.99th=[ 4329] 00:21:43.150 bw ( KiB/s): min=83968, max=329728, per=4.31%, avg=197632.00, stdev=77953.73, samples=12 00:21:43.150 iops : min= 82, max= 322, avg=193.00, stdev=76.13, samples=12 00:21:43.150 lat (msec) : 100=0.08%, 250=1.09%, 500=37.25%, 750=34.84%, 1000=14.85% 00:21:43.150 lat (msec) : 2000=4.98%, >=2000=6.92% 00:21:43.150 cpu : usr=0.02%, sys=1.65%, ctx=1518, majf=0, minf=32769 00:21:43.150 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.5%, >=64=95.1% 00:21:43.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.150 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:43.150 issued rwts: total=1286,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.150 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.150 job4: (groupid=0, jobs=1): err= 0: pid=2983733: Mon Jul 15 10:29:18 2024 00:21:43.150 read: IOPS=75, BW=75.7MiB/s (79.4MB/s)(758MiB/10013msec) 00:21:43.150 slat (usec): min=24, max=2194.2k, avg=13189.86, stdev=80835.89 00:21:43.150 clat (msec): min=11, max=4282, avg=1555.76, stdev=1170.41 00:21:43.150 lat (msec): min=12, max=4297, avg=1568.95, stdev=1175.01 00:21:43.150 clat percentiles (msec): 00:21:43.150 | 1.00th=[ 18], 5.00th=[ 41], 10.00th=[ 73], 20.00th=[ 1028], 00:21:43.150 | 30.00th=[ 1133], 40.00th=[ 1183], 50.00th=[ 1267], 60.00th=[ 1418], 00:21:43.150 | 70.00th=[ 1536], 80.00th=[ 1636], 90.00th=[ 3842], 95.00th=[ 4111], 00:21:43.150 | 99.00th=[ 4279], 99.50th=[ 4279], 99.90th=[ 4279], 99.95th=[ 4279], 00:21:43.150 | 99.99th=[ 4279] 00:21:43.150 bw ( KiB/s): min= 2048, max=143360, per=1.64%, avg=75044.57, stdev=38353.35, samples=14 00:21:43.150 iops : min= 2, max= 140, avg=73.29, stdev=37.45, samples=14 00:21:43.150 lat (msec) : 20=1.45%, 50=4.88%, 100=4.62%, 250=2.24%, 500=2.37% 00:21:43.150 lat (msec) : 750=1.58%, 1000=1.98%, 2000=64.12%, >=2000=16.75% 00:21:43.150 cpu : usr=0.02%, sys=1.16%, ctx=2376, majf=0, minf=32769 00:21:43.150 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.1%, 32=4.2%, >=64=91.7% 00:21:43.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.150 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:21:43.150 issued rwts: total=758,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.150 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.150 job4: (groupid=0, jobs=1): err= 0: pid=2983734: Mon Jul 15 10:29:18 2024 00:21:43.150 read: IOPS=71, BW=71.4MiB/s (74.9MB/s)(715MiB/10015msec) 00:21:43.150 slat (usec): min=30, max=2043.4k, avg=13985.13, stdev=77908.56 00:21:43.150 clat (msec): min=12, max=4274, avg=1678.95, stdev=1058.49 00:21:43.150 lat (msec): min=14, max=4281, avg=1692.94, stdev=1060.71 00:21:43.150 clat percentiles (msec): 00:21:43.150 | 1.00th=[ 27], 5.00th=[ 169], 10.00th=[ 
550], 20.00th=[ 944], 00:21:43.150 | 30.00th=[ 1267], 40.00th=[ 1334], 50.00th=[ 1469], 60.00th=[ 1552], 00:21:43.150 | 70.00th=[ 1670], 80.00th=[ 2140], 90.00th=[ 3641], 95.00th=[ 4144], 00:21:43.150 | 99.00th=[ 4245], 99.50th=[ 4245], 99.90th=[ 4279], 99.95th=[ 4279], 00:21:43.150 | 99.99th=[ 4279] 00:21:43.150 bw ( KiB/s): min= 2048, max=223232, per=1.68%, avg=76946.29, stdev=49763.35, samples=14 00:21:43.150 iops : min= 2, max= 218, avg=75.14, stdev=48.60, samples=14 00:21:43.150 lat (msec) : 20=0.42%, 50=2.66%, 100=0.98%, 250=1.68%, 500=2.94% 00:21:43.150 lat (msec) : 750=6.29%, 1000=6.29%, 2000=56.78%, >=2000=21.96% 00:21:43.150 cpu : usr=0.03%, sys=1.16%, ctx=2401, majf=0, minf=32769 00:21:43.150 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.2%, 32=4.5%, >=64=91.2% 00:21:43.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.150 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:21:43.150 issued rwts: total=715,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.150 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.150 job4: (groupid=0, jobs=1): err= 0: pid=2983735: Mon Jul 15 10:29:18 2024 00:21:43.150 read: IOPS=55, BW=55.8MiB/s (58.5MB/s)(560MiB/10039msec) 00:21:43.150 slat (usec): min=29, max=2152.7k, avg=17853.25, stdev=91681.52 00:21:43.150 clat (msec): min=37, max=4598, avg=2130.93, stdev=1270.44 00:21:43.150 lat (msec): min=86, max=4602, avg=2148.78, stdev=1273.51 00:21:43.150 clat percentiles (msec): 00:21:43.150 | 1.00th=[ 176], 5.00th=[ 625], 10.00th=[ 894], 20.00th=[ 1452], 00:21:43.150 | 30.00th=[ 1502], 40.00th=[ 1552], 50.00th=[ 1653], 60.00th=[ 1754], 00:21:43.150 | 70.00th=[ 1871], 80.00th=[ 4111], 90.00th=[ 4463], 95.00th=[ 4530], 00:21:43.150 | 99.00th=[ 4530], 99.50th=[ 4597], 99.90th=[ 4597], 99.95th=[ 4597], 00:21:43.150 | 99.99th=[ 4597] 00:21:43.150 bw ( KiB/s): min=16384, max=100352, per=1.38%, avg=63215.21, stdev=26229.37, samples=14 00:21:43.150 iops : min= 16, max= 98, avg=61.64, stdev=25.67, samples=14 00:21:43.150 lat (msec) : 50=0.18%, 100=0.36%, 250=1.25%, 500=1.96%, 750=4.46% 00:21:43.150 lat (msec) : 1000=2.68%, 2000=64.46%, >=2000=24.64% 00:21:43.150 cpu : usr=0.04%, sys=1.66%, ctx=1546, majf=0, minf=32769 00:21:43.150 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.4%, 16=2.9%, 32=5.7%, >=64=88.8% 00:21:43.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.150 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:21:43.150 issued rwts: total=560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.150 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.150 job4: (groupid=0, jobs=1): err= 0: pid=2983736: Mon Jul 15 10:29:18 2024 00:21:43.150 read: IOPS=77, BW=77.5MiB/s (81.3MB/s)(781MiB/10078msec) 00:21:43.150 slat (usec): min=26, max=1850.8k, avg=12826.59, stdev=67734.82 00:21:43.150 clat (msec): min=56, max=3894, avg=1588.06, stdev=992.89 00:21:43.150 lat (msec): min=95, max=3902, avg=1600.89, stdev=998.70 00:21:43.150 clat percentiles (msec): 00:21:43.150 | 1.00th=[ 103], 5.00th=[ 234], 10.00th=[ 418], 20.00th=[ 523], 00:21:43.150 | 30.00th=[ 944], 40.00th=[ 1318], 50.00th=[ 1368], 60.00th=[ 1871], 00:21:43.150 | 70.00th=[ 2056], 80.00th=[ 2299], 90.00th=[ 3104], 95.00th=[ 3406], 00:21:43.150 | 99.00th=[ 3842], 99.50th=[ 3876], 99.90th=[ 3910], 99.95th=[ 3910], 00:21:43.150 | 99.99th=[ 3910] 00:21:43.150 bw ( KiB/s): min=18432, max=249856, per=1.82%, avg=83557.06, stdev=56135.38, samples=16 00:21:43.150 iops : min= 
18, max= 244, avg=81.50, stdev=54.79, samples=16 00:21:43.150 lat (msec) : 100=0.90%, 250=4.99%, 500=7.81%, 750=13.06%, 1000=4.35% 00:21:43.150 lat (msec) : 2000=36.24%, >=2000=32.65% 00:21:43.150 cpu : usr=0.03%, sys=1.73%, ctx=2083, majf=0, minf=32769 00:21:43.150 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.1%, >=64=91.9% 00:21:43.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.150 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:21:43.150 issued rwts: total=781,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.150 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.150 job4: (groupid=0, jobs=1): err= 0: pid=2983737: Mon Jul 15 10:29:18 2024 00:21:43.150 read: IOPS=76, BW=76.3MiB/s (80.0MB/s)(768MiB/10070msec) 00:21:43.150 slat (usec): min=27, max=2192.9k, avg=13058.16, stdev=79809.22 00:21:43.150 clat (msec): min=37, max=3879, avg=1579.67, stdev=860.83 00:21:43.150 lat (msec): min=69, max=3880, avg=1592.73, stdev=863.35 00:21:43.150 clat percentiles (msec): 00:21:43.150 | 1.00th=[ 94], 5.00th=[ 651], 10.00th=[ 827], 20.00th=[ 953], 00:21:43.150 | 30.00th=[ 1200], 40.00th=[ 1267], 50.00th=[ 1318], 60.00th=[ 1401], 00:21:43.150 | 70.00th=[ 1636], 80.00th=[ 1821], 90.00th=[ 3306], 95.00th=[ 3339], 00:21:43.150 | 99.00th=[ 3742], 99.50th=[ 3842], 99.90th=[ 3876], 99.95th=[ 3876], 00:21:43.150 | 99.99th=[ 3876] 00:21:43.150 bw ( KiB/s): min=32768, max=163840, per=2.04%, avg=93569.64, stdev=39921.76, samples=14 00:21:43.150 iops : min= 32, max= 160, avg=91.29, stdev=39.04, samples=14 00:21:43.150 lat (msec) : 50=0.13%, 100=0.91%, 250=0.52%, 500=1.43%, 750=3.52% 00:21:43.150 lat (msec) : 1000=15.10%, 2000=61.85%, >=2000=16.54% 00:21:43.150 cpu : usr=0.04%, sys=1.63%, ctx=2109, majf=0, minf=32769 00:21:43.150 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.1%, 32=4.2%, >=64=91.8% 00:21:43.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.150 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:21:43.150 issued rwts: total=768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.150 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.150 job4: (groupid=0, jobs=1): err= 0: pid=2983738: Mon Jul 15 10:29:18 2024 00:21:43.150 read: IOPS=64, BW=64.7MiB/s (67.9MB/s)(650MiB/10042msec) 00:21:43.150 slat (usec): min=25, max=2055.1k, avg=15385.95, stdev=82461.52 00:21:43.150 clat (msec): min=38, max=4464, avg=1740.85, stdev=1093.37 00:21:43.150 lat (msec): min=41, max=4478, avg=1756.23, stdev=1098.42 00:21:43.150 clat percentiles (msec): 00:21:43.150 | 1.00th=[ 68], 5.00th=[ 255], 10.00th=[ 414], 20.00th=[ 776], 00:21:43.150 | 30.00th=[ 835], 40.00th=[ 1217], 50.00th=[ 1737], 60.00th=[ 2022], 00:21:43.150 | 70.00th=[ 2400], 80.00th=[ 2567], 90.00th=[ 3138], 95.00th=[ 3775], 00:21:43.150 | 99.00th=[ 4329], 99.50th=[ 4396], 99.90th=[ 4463], 99.95th=[ 4463], 00:21:43.150 | 99.99th=[ 4463] 00:21:43.150 bw ( KiB/s): min= 8192, max=194171, per=1.94%, avg=88862.92, stdev=56841.16, samples=12 00:21:43.150 iops : min= 8, max= 189, avg=86.67, stdev=55.31, samples=12 00:21:43.150 lat (msec) : 50=0.31%, 100=1.85%, 250=2.62%, 500=7.23%, 750=6.77% 00:21:43.150 lat (msec) : 1000=16.46%, 2000=24.15%, >=2000=40.62% 00:21:43.150 cpu : usr=0.01%, sys=1.03%, ctx=2094, majf=0, minf=32769 00:21:43.150 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.5%, 32=4.9%, >=64=90.3% 00:21:43.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.150 
complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:21:43.150 issued rwts: total=650,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.150 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.150 job4: (groupid=0, jobs=1): err= 0: pid=2983739: Mon Jul 15 10:29:18 2024 00:21:43.150 read: IOPS=50, BW=50.8MiB/s (53.3MB/s)(510MiB/10039msec) 00:21:43.150 slat (usec): min=47, max=2145.8k, avg=19605.76, stdev=96849.00 00:21:43.150 clat (msec): min=37, max=4541, avg=2242.39, stdev=1264.67 00:21:43.150 lat (msec): min=42, max=4548, avg=2262.00, stdev=1269.22 00:21:43.150 clat percentiles (msec): 00:21:43.150 | 1.00th=[ 48], 5.00th=[ 136], 10.00th=[ 300], 20.00th=[ 1301], 00:21:43.150 | 30.00th=[ 1871], 40.00th=[ 2022], 50.00th=[ 2123], 60.00th=[ 2198], 00:21:43.150 | 70.00th=[ 2299], 80.00th=[ 3775], 90.00th=[ 4178], 95.00th=[ 4463], 00:21:43.150 | 99.00th=[ 4530], 99.50th=[ 4530], 99.90th=[ 4530], 99.95th=[ 4530], 00:21:43.150 | 99.99th=[ 4530] 00:21:43.150 bw ( KiB/s): min= 6144, max=94208, per=1.11%, avg=50877.00, stdev=24587.94, samples=13 00:21:43.150 iops : min= 6, max= 92, avg=49.62, stdev=24.01, samples=13 00:21:43.150 lat (msec) : 50=1.18%, 100=2.35%, 250=4.90%, 500=3.33%, 750=3.33% 00:21:43.150 lat (msec) : 1000=2.75%, 2000=20.00%, >=2000=62.16% 00:21:43.150 cpu : usr=0.00%, sys=0.92%, ctx=1963, majf=0, minf=32769 00:21:43.150 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.1%, 32=6.3%, >=64=87.6% 00:21:43.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.150 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:21:43.150 issued rwts: total=510,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.151 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.151 job4: (groupid=0, jobs=1): err= 0: pid=2983740: Mon Jul 15 10:29:18 2024 00:21:43.151 read: IOPS=58, BW=58.6MiB/s (61.5MB/s)(588MiB/10027msec) 00:21:43.151 slat (usec): min=32, max=1958.3k, avg=17004.08, stdev=110880.49 00:21:43.151 clat (msec): min=26, max=5278, avg=1979.41, stdev=1574.20 00:21:43.151 lat (msec): min=30, max=5291, avg=1996.41, stdev=1579.73 00:21:43.151 clat percentiles (msec): 00:21:43.151 | 1.00th=[ 40], 5.00th=[ 122], 10.00th=[ 493], 20.00th=[ 927], 00:21:43.151 | 30.00th=[ 1036], 40.00th=[ 1200], 50.00th=[ 1435], 60.00th=[ 1536], 00:21:43.151 | 70.00th=[ 1670], 80.00th=[ 3239], 90.00th=[ 5134], 95.00th=[ 5201], 00:21:43.151 | 99.00th=[ 5269], 99.50th=[ 5269], 99.90th=[ 5269], 99.95th=[ 5269], 00:21:43.151 | 99.99th=[ 5269] 00:21:43.151 bw ( KiB/s): min=18432, max=158012, per=1.62%, avg=74306.27, stdev=41347.09, samples=11 00:21:43.151 iops : min= 18, max= 154, avg=72.45, stdev=40.37, samples=11 00:21:43.151 lat (msec) : 50=2.21%, 100=1.87%, 250=2.38%, 500=3.57%, 750=4.93% 00:21:43.151 lat (msec) : 1000=13.61%, 2000=43.20%, >=2000=28.23% 00:21:43.151 cpu : usr=0.00%, sys=0.99%, ctx=1878, majf=0, minf=32769 00:21:43.151 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.7%, 32=5.4%, >=64=89.3% 00:21:43.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.151 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:21:43.151 issued rwts: total=588,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.151 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.151 job4: (groupid=0, jobs=1): err= 0: pid=2983741: Mon Jul 15 10:29:18 2024 00:21:43.151 read: IOPS=57, BW=57.3MiB/s (60.1MB/s)(574MiB/10014msec) 00:21:43.151 slat (usec): min=331, max=2071.4k, avg=17420.73, 
stdev=88067.72 00:21:43.151 clat (msec): min=12, max=3721, avg=1991.24, stdev=1005.55 00:21:43.151 lat (msec): min=14, max=3727, avg=2008.66, stdev=1007.91 00:21:43.151 clat percentiles (msec): 00:21:43.151 | 1.00th=[ 22], 5.00th=[ 186], 10.00th=[ 451], 20.00th=[ 1452], 00:21:43.151 | 30.00th=[ 1653], 40.00th=[ 1770], 50.00th=[ 1838], 60.00th=[ 1972], 00:21:43.151 | 70.00th=[ 2165], 80.00th=[ 3440], 90.00th=[ 3574], 95.00th=[ 3608], 00:21:43.151 | 99.00th=[ 3675], 99.50th=[ 3708], 99.90th=[ 3708], 99.95th=[ 3708], 00:21:43.151 | 99.99th=[ 3708] 00:21:43.151 bw ( KiB/s): min= 2048, max=110592, per=1.43%, avg=65694.67, stdev=30667.27, samples=12 00:21:43.151 iops : min= 2, max= 108, avg=64.08, stdev=29.93, samples=12 00:21:43.151 lat (msec) : 20=0.70%, 50=2.09%, 100=0.87%, 250=3.83%, 500=3.31% 00:21:43.151 lat (msec) : 750=0.87%, 1000=2.09%, 2000=47.04%, >=2000=39.20% 00:21:43.151 cpu : usr=0.01%, sys=0.83%, ctx=2313, majf=0, minf=32769 00:21:43.151 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.8%, 32=5.6%, >=64=89.0% 00:21:43.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.151 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:21:43.151 issued rwts: total=574,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.151 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.151 job5: (groupid=0, jobs=1): err= 0: pid=2983742: Mon Jul 15 10:29:18 2024 00:21:43.151 read: IOPS=61, BW=61.2MiB/s (64.2MB/s)(619MiB/10107msec) 00:21:43.151 slat (usec): min=27, max=187264, avg=16227.99, stdev=23763.04 00:21:43.151 clat (msec): min=58, max=4006, avg=1933.04, stdev=989.33 00:21:43.151 lat (msec): min=152, max=4041, avg=1949.27, stdev=993.53 00:21:43.151 clat percentiles (msec): 00:21:43.151 | 1.00th=[ 279], 5.00th=[ 676], 10.00th=[ 1099], 20.00th=[ 1234], 00:21:43.151 | 30.00th=[ 1318], 40.00th=[ 1385], 50.00th=[ 1502], 60.00th=[ 1737], 00:21:43.151 | 70.00th=[ 2165], 80.00th=[ 2937], 90.00th=[ 3742], 95.00th=[ 3842], 00:21:43.151 | 99.00th=[ 3943], 99.50th=[ 3977], 99.90th=[ 4010], 99.95th=[ 4010], 00:21:43.151 | 99.99th=[ 4010] 00:21:43.151 bw ( KiB/s): min=18432, max=118546, per=1.22%, avg=55718.44, stdev=29369.61, samples=18 00:21:43.151 iops : min= 18, max= 115, avg=54.28, stdev=28.63, samples=18 00:21:43.151 lat (msec) : 100=0.16%, 250=0.81%, 500=2.10%, 750=2.58%, 1000=2.91% 00:21:43.151 lat (msec) : 2000=57.03%, >=2000=34.41% 00:21:43.151 cpu : usr=0.03%, sys=1.15%, ctx=1811, majf=0, minf=32769 00:21:43.151 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.3%, 16=2.6%, 32=5.2%, >=64=89.8% 00:21:43.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.151 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:21:43.151 issued rwts: total=619,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.151 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.151 job5: (groupid=0, jobs=1): err= 0: pid=2983743: Mon Jul 15 10:29:18 2024 00:21:43.151 read: IOPS=25, BW=25.9MiB/s (27.1MB/s)(260MiB/10054msec) 00:21:43.151 slat (usec): min=269, max=1543.1k, avg=38458.92, stdev=102391.84 00:21:43.151 clat (msec): min=52, max=5550, avg=3520.96, stdev=1564.00 00:21:43.151 lat (msec): min=54, max=5565, avg=3559.42, stdev=1559.48 00:21:43.151 clat percentiles (msec): 00:21:43.151 | 1.00th=[ 56], 5.00th=[ 245], 10.00th=[ 776], 20.00th=[ 2072], 00:21:43.151 | 30.00th=[ 3104], 40.00th=[ 3205], 50.00th=[ 3775], 60.00th=[ 4463], 00:21:43.151 | 70.00th=[ 4597], 80.00th=[ 4866], 90.00th=[ 5269], 95.00th=[ 
5470], 00:21:43.151 | 99.00th=[ 5537], 99.50th=[ 5537], 99.90th=[ 5537], 99.95th=[ 5537], 00:21:43.151 | 99.99th=[ 5537] 00:21:43.151 bw ( KiB/s): min= 6144, max=45056, per=0.50%, avg=22694.92, stdev=11567.46, samples=12 00:21:43.151 iops : min= 6, max= 44, avg=22.08, stdev=11.30, samples=12 00:21:43.151 lat (msec) : 100=2.31%, 250=2.69%, 500=2.31%, 750=1.54%, 1000=2.69% 00:21:43.151 lat (msec) : 2000=8.08%, >=2000=80.38% 00:21:43.151 cpu : usr=0.01%, sys=0.69%, ctx=1201, majf=0, minf=32769 00:21:43.151 IO depths : 1=0.4%, 2=0.8%, 4=1.5%, 8=3.1%, 16=6.2%, 32=12.3%, >=64=75.8% 00:21:43.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.151 complete : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7% 00:21:43.151 issued rwts: total=260,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.151 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.151 job5: (groupid=0, jobs=1): err= 0: pid=2983744: Mon Jul 15 10:29:18 2024 00:21:43.151 read: IOPS=80, BW=80.4MiB/s (84.3MB/s)(815MiB/10140msec) 00:21:43.151 slat (usec): min=25, max=1514.9k, avg=12366.34, stdev=56335.40 00:21:43.151 clat (msec): min=58, max=2695, avg=1357.62, stdev=616.58 00:21:43.151 lat (msec): min=141, max=2703, avg=1369.99, stdev=617.87 00:21:43.151 clat percentiles (msec): 00:21:43.151 | 1.00th=[ 215], 5.00th=[ 659], 10.00th=[ 776], 20.00th=[ 860], 00:21:43.151 | 30.00th=[ 927], 40.00th=[ 978], 50.00th=[ 1099], 60.00th=[ 1351], 00:21:43.151 | 70.00th=[ 1703], 80.00th=[ 2022], 90.00th=[ 2366], 95.00th=[ 2467], 00:21:43.151 | 99.00th=[ 2635], 99.50th=[ 2668], 99.90th=[ 2702], 99.95th=[ 2702], 00:21:43.151 | 99.99th=[ 2702] 00:21:43.151 bw ( KiB/s): min=24576, max=183952, per=1.91%, avg=87477.44, stdev=49406.45, samples=16 00:21:43.151 iops : min= 24, max= 179, avg=85.38, stdev=48.17, samples=16 00:21:43.151 lat (msec) : 100=0.12%, 250=1.23%, 500=1.23%, 750=5.15%, 1000=35.21% 00:21:43.151 lat (msec) : 2000=36.32%, >=2000=20.74% 00:21:43.151 cpu : usr=0.06%, sys=1.24%, ctx=1863, majf=0, minf=32769 00:21:43.151 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=3.9%, >=64=92.3% 00:21:43.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.151 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:43.151 issued rwts: total=815,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.151 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.151 job5: (groupid=0, jobs=1): err= 0: pid=2983745: Mon Jul 15 10:29:18 2024 00:21:43.151 read: IOPS=94, BW=94.6MiB/s (99.2MB/s)(957MiB/10115msec) 00:21:43.151 slat (usec): min=25, max=613431, avg=10460.09, stdev=28993.17 00:21:43.151 clat (msec): min=101, max=3368, avg=1294.44, stdev=736.30 00:21:43.151 lat (msec): min=160, max=3388, avg=1304.90, stdev=739.76 00:21:43.151 clat percentiles (msec): 00:21:43.151 | 1.00th=[ 271], 5.00th=[ 693], 10.00th=[ 709], 20.00th=[ 735], 00:21:43.151 | 30.00th=[ 760], 40.00th=[ 785], 50.00th=[ 936], 60.00th=[ 1368], 00:21:43.151 | 70.00th=[ 1435], 80.00th=[ 1787], 90.00th=[ 2500], 95.00th=[ 3004], 00:21:43.151 | 99.00th=[ 3272], 99.50th=[ 3306], 99.90th=[ 3373], 99.95th=[ 3373], 00:21:43.151 | 99.99th=[ 3373] 00:21:43.151 bw ( KiB/s): min=20480, max=206848, per=1.95%, avg=89259.16, stdev=60920.08, samples=19 00:21:43.151 iops : min= 20, max= 202, avg=87.00, stdev=59.48, samples=19 00:21:43.151 lat (msec) : 250=0.52%, 500=1.46%, 750=25.18%, 1000=24.97%, 2000=30.41% 00:21:43.151 lat (msec) : >=2000=17.45% 00:21:43.151 cpu : usr=0.00%, sys=1.43%, 
ctx=1709, majf=0, minf=32769 00:21:43.151 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.7%, 32=3.3%, >=64=93.4% 00:21:43.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.151 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:43.151 issued rwts: total=957,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.151 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.151 job5: (groupid=0, jobs=1): err= 0: pid=2983746: Mon Jul 15 10:29:18 2024 00:21:43.151 read: IOPS=49, BW=49.1MiB/s (51.5MB/s)(494MiB/10059msec) 00:21:43.151 slat (usec): min=34, max=177391, avg=20239.26, stdev=26163.94 00:21:43.151 clat (msec): min=58, max=3855, avg=2235.13, stdev=887.83 00:21:43.151 lat (msec): min=135, max=3865, avg=2255.37, stdev=887.44 00:21:43.151 clat percentiles (msec): 00:21:43.151 | 1.00th=[ 176], 5.00th=[ 693], 10.00th=[ 978], 20.00th=[ 1351], 00:21:43.151 | 30.00th=[ 1754], 40.00th=[ 2165], 50.00th=[ 2333], 60.00th=[ 2567], 00:21:43.151 | 70.00th=[ 2836], 80.00th=[ 3071], 90.00th=[ 3373], 95.00th=[ 3574], 00:21:43.151 | 99.00th=[ 3809], 99.50th=[ 3842], 99.90th=[ 3842], 99.95th=[ 3842], 00:21:43.152 | 99.99th=[ 3842] 00:21:43.152 bw ( KiB/s): min=16384, max=178176, per=1.09%, avg=50086.27, stdev=39957.13, samples=15 00:21:43.152 iops : min= 16, max= 174, avg=48.80, stdev=39.08, samples=15 00:21:43.152 lat (msec) : 100=0.20%, 250=1.01%, 500=0.81%, 750=3.64%, 1000=5.26% 00:21:43.152 lat (msec) : 2000=27.13%, >=2000=61.94% 00:21:43.152 cpu : usr=0.02%, sys=1.50%, ctx=1830, majf=0, minf=32769 00:21:43.152 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.2%, 32=6.5%, >=64=87.2% 00:21:43.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.152 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:21:43.152 issued rwts: total=494,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.152 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.152 job5: (groupid=0, jobs=1): err= 0: pid=2983747: Mon Jul 15 10:29:18 2024 00:21:43.152 read: IOPS=55, BW=55.7MiB/s (58.5MB/s)(559MiB/10028msec) 00:21:43.152 slat (usec): min=29, max=355613, avg=17887.50, stdev=32059.18 00:21:43.152 clat (msec): min=26, max=4235, avg=1941.90, stdev=1258.31 00:21:43.152 lat (msec): min=28, max=4307, avg=1959.79, stdev=1263.55 00:21:43.152 clat percentiles (msec): 00:21:43.152 | 1.00th=[ 44], 5.00th=[ 296], 10.00th=[ 535], 20.00th=[ 625], 00:21:43.152 | 30.00th=[ 902], 40.00th=[ 1183], 50.00th=[ 1787], 60.00th=[ 2165], 00:21:43.152 | 70.00th=[ 2836], 80.00th=[ 3708], 90.00th=[ 3809], 95.00th=[ 3876], 00:21:43.152 | 99.00th=[ 3977], 99.50th=[ 4044], 99.90th=[ 4245], 99.95th=[ 4245], 00:21:43.152 | 99.99th=[ 4245] 00:21:43.152 bw ( KiB/s): min= 4096, max=225280, per=1.26%, avg=57925.00, stdev=62825.59, samples=14 00:21:43.152 iops : min= 4, max= 220, avg=56.50, stdev=61.39, samples=14 00:21:43.152 lat (msec) : 50=1.79%, 100=0.72%, 250=1.61%, 500=2.15%, 750=17.53% 00:21:43.152 lat (msec) : 1000=8.77%, 2000=22.00%, >=2000=45.44% 00:21:43.152 cpu : usr=0.02%, sys=1.13%, ctx=1749, majf=0, minf=32769 00:21:43.152 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.4%, 16=2.9%, 32=5.7%, >=64=88.7% 00:21:43.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.152 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:21:43.152 issued rwts: total=559,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.152 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.152 job5: 
(groupid=0, jobs=1): err= 0: pid=2983748: Mon Jul 15 10:29:18 2024 00:21:43.152 read: IOPS=119, BW=120MiB/s (126MB/s)(1203MiB/10048msec) 00:21:43.152 slat (usec): min=26, max=109268, avg=8317.79, stdev=14044.86 00:21:43.152 clat (msec): min=35, max=3375, avg=1015.27, stdev=718.59 00:21:43.152 lat (msec): min=58, max=3387, avg=1023.58, stdev=721.67 00:21:43.152 clat percentiles (msec): 00:21:43.152 | 1.00th=[ 228], 5.00th=[ 393], 10.00th=[ 401], 20.00th=[ 409], 00:21:43.152 | 30.00th=[ 435], 40.00th=[ 567], 50.00th=[ 827], 60.00th=[ 1099], 00:21:43.152 | 70.00th=[ 1234], 80.00th=[ 1418], 90.00th=[ 1972], 95.00th=[ 2668], 00:21:43.152 | 99.00th=[ 3306], 99.50th=[ 3339], 99.90th=[ 3373], 99.95th=[ 3373], 00:21:43.152 | 99.99th=[ 3373] 00:21:43.152 bw ( KiB/s): min=32768, max=346112, per=2.77%, avg=127079.59, stdev=102574.87, samples=17 00:21:43.152 iops : min= 32, max= 338, avg=124.06, stdev=100.16, samples=17 00:21:43.152 lat (msec) : 50=0.08%, 100=0.33%, 250=0.75%, 500=35.16%, 750=8.89% 00:21:43.152 lat (msec) : 1000=10.64%, 2000=34.41%, >=2000=9.73% 00:21:43.152 cpu : usr=0.05%, sys=2.00%, ctx=2187, majf=0, minf=32769 00:21:43.152 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 32=2.7%, >=64=94.8% 00:21:43.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.152 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:43.152 issued rwts: total=1203,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.152 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.152 job5: (groupid=0, jobs=1): err= 0: pid=2983749: Mon Jul 15 10:29:18 2024 00:21:43.152 read: IOPS=100, BW=100MiB/s (105MB/s)(1009MiB/10071msec) 00:21:43.152 slat (usec): min=33, max=81119, avg=9910.61, stdev=12862.68 00:21:43.152 clat (msec): min=63, max=2734, avg=1097.85, stdev=512.03 00:21:43.152 lat (msec): min=78, max=2750, avg=1107.76, stdev=516.43 00:21:43.152 clat percentiles (msec): 00:21:43.152 | 1.00th=[ 113], 5.00th=[ 388], 10.00th=[ 718], 20.00th=[ 827], 00:21:43.152 | 30.00th=[ 852], 40.00th=[ 919], 50.00th=[ 978], 60.00th=[ 1011], 00:21:43.152 | 70.00th=[ 1053], 80.00th=[ 1334], 90.00th=[ 1905], 95.00th=[ 2265], 00:21:43.152 | 99.00th=[ 2567], 99.50th=[ 2635], 99.90th=[ 2702], 99.95th=[ 2735], 00:21:43.152 | 99.99th=[ 2735] 00:21:43.152 bw ( KiB/s): min=45056, max=157696, per=2.62%, avg=120104.87, stdev=37508.91, samples=15 00:21:43.152 iops : min= 44, max= 154, avg=117.27, stdev=36.61, samples=15 00:21:43.152 lat (msec) : 100=0.69%, 250=1.98%, 500=4.06%, 750=3.57%, 1000=47.18% 00:21:43.152 lat (msec) : 2000=33.30%, >=2000=9.22% 00:21:43.152 cpu : usr=0.08%, sys=2.04%, ctx=1402, majf=0, minf=32769 00:21:43.152 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.2%, >=64=93.8% 00:21:43.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.152 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:43.152 issued rwts: total=1009,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.152 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.152 job5: (groupid=0, jobs=1): err= 0: pid=2983750: Mon Jul 15 10:29:18 2024 00:21:43.152 read: IOPS=189, BW=189MiB/s (198MB/s)(1900MiB/10040msec) 00:21:43.152 slat (usec): min=23, max=148170, avg=5261.02, stdev=11512.95 00:21:43.152 clat (msec): min=36, max=2412, avg=637.70, stdev=652.28 00:21:43.152 lat (msec): min=41, max=2416, avg=642.96, stdev=657.07 00:21:43.152 clat percentiles (msec): 00:21:43.152 | 1.00th=[ 87], 5.00th=[ 203], 10.00th=[ 207], 
20.00th=[ 209], 00:21:43.152 | 30.00th=[ 215], 40.00th=[ 309], 50.00th=[ 317], 60.00th=[ 355], 00:21:43.152 | 70.00th=[ 523], 80.00th=[ 1003], 90.00th=[ 1938], 95.00th=[ 2198], 00:21:43.152 | 99.00th=[ 2299], 99.50th=[ 2333], 99.90th=[ 2400], 99.95th=[ 2400], 00:21:43.152 | 99.99th=[ 2400] 00:21:43.152 bw ( KiB/s): min=38912, max=622592, per=4.40%, avg=201680.72, stdev=197808.26, samples=18 00:21:43.152 iops : min= 38, max= 608, avg=196.89, stdev=193.20, samples=18 00:21:43.152 lat (msec) : 50=0.21%, 100=1.26%, 250=32.89%, 500=33.00%, 750=6.63% 00:21:43.152 lat (msec) : 1000=5.58%, 2000=11.79%, >=2000=8.63% 00:21:43.152 cpu : usr=0.03%, sys=2.58%, ctx=2631, majf=0, minf=32769 00:21:43.152 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.7%, >=64=96.7% 00:21:43.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.152 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:43.152 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.152 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.152 job5: (groupid=0, jobs=1): err= 0: pid=2983751: Mon Jul 15 10:29:18 2024 00:21:43.152 read: IOPS=41, BW=41.3MiB/s (43.3MB/s)(417MiB/10102msec) 00:21:43.152 slat (usec): min=24, max=228073, avg=23977.25, stdev=31220.88 00:21:43.152 clat (msec): min=101, max=4166, avg=2827.78, stdev=941.17 00:21:43.152 lat (msec): min=107, max=4200, avg=2851.75, stdev=941.07 00:21:43.152 clat percentiles (msec): 00:21:43.152 | 1.00th=[ 313], 5.00th=[ 827], 10.00th=[ 1603], 20.00th=[ 2198], 00:21:43.152 | 30.00th=[ 2500], 40.00th=[ 2601], 50.00th=[ 2668], 60.00th=[ 3138], 00:21:43.152 | 70.00th=[ 3540], 80.00th=[ 3775], 90.00th=[ 4010], 95.00th=[ 4077], 00:21:43.152 | 99.00th=[ 4144], 99.50th=[ 4144], 99.90th=[ 4178], 99.95th=[ 4178], 00:21:43.152 | 99.99th=[ 4178] 00:21:43.152 bw ( KiB/s): min=16384, max=94208, per=0.81%, avg=37112.06, stdev=20327.48, samples=16 00:21:43.152 iops : min= 16, max= 92, avg=36.13, stdev=19.89, samples=16 00:21:43.152 lat (msec) : 250=0.96%, 500=1.44%, 750=1.92%, 1000=0.96%, 2000=9.35% 00:21:43.152 lat (msec) : >=2000=85.37% 00:21:43.152 cpu : usr=0.03%, sys=1.22%, ctx=1596, majf=0, minf=32769 00:21:43.152 IO depths : 1=0.2%, 2=0.5%, 4=1.0%, 8=1.9%, 16=3.8%, 32=7.7%, >=64=84.9% 00:21:43.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.152 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:21:43.152 issued rwts: total=417,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.152 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.152 job5: (groupid=0, jobs=1): err= 0: pid=2983752: Mon Jul 15 10:29:18 2024 00:21:43.152 read: IOPS=127, BW=127MiB/s (134MB/s)(1293MiB/10142msec) 00:21:43.152 slat (usec): min=34, max=96363, avg=7758.71, stdev=10507.30 00:21:43.152 clat (msec): min=100, max=2008, avg=930.34, stdev=457.63 00:21:43.152 lat (msec): min=145, max=2026, avg=938.10, stdev=459.48 00:21:43.152 clat percentiles (msec): 00:21:43.152 | 1.00th=[ 368], 5.00th=[ 430], 10.00th=[ 510], 20.00th=[ 550], 00:21:43.152 | 30.00th=[ 567], 40.00th=[ 584], 50.00th=[ 844], 60.00th=[ 961], 00:21:43.152 | 70.00th=[ 1183], 80.00th=[ 1267], 90.00th=[ 1703], 95.00th=[ 1905], 00:21:43.152 | 99.00th=[ 1989], 99.50th=[ 2005], 99.90th=[ 2005], 99.95th=[ 2005], 00:21:43.152 | 99.99th=[ 2005] 00:21:43.152 bw ( KiB/s): min=30658, max=249856, per=3.06%, avg=140319.00, stdev=73646.11, samples=17 00:21:43.152 iops : min= 29, max= 244, avg=136.88, stdev=72.05, 
samples=17 00:21:43.152 lat (msec) : 250=0.54%, 500=8.97%, 750=37.66%, 1000=15.08%, 2000=37.28% 00:21:43.152 lat (msec) : >=2000=0.46% 00:21:43.152 cpu : usr=0.14%, sys=2.31%, ctx=2106, majf=0, minf=32769 00:21:43.152 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.5%, >=64=95.1% 00:21:43.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.152 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:43.152 issued rwts: total=1293,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.152 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.152 job5: (groupid=0, jobs=1): err= 0: pid=2983753: Mon Jul 15 10:29:18 2024 00:21:43.152 read: IOPS=67, BW=67.4MiB/s (70.7MB/s)(678MiB/10055msec) 00:21:43.152 slat (usec): min=36, max=156553, avg=14772.02, stdev=19735.47 00:21:43.152 clat (msec): min=35, max=3831, avg=1632.91, stdev=925.64 00:21:43.152 lat (msec): min=55, max=3854, avg=1647.68, stdev=928.98 00:21:43.152 clat percentiles (msec): 00:21:43.152 | 1.00th=[ 112], 5.00th=[ 667], 10.00th=[ 818], 20.00th=[ 860], 00:21:43.152 | 30.00th=[ 1003], 40.00th=[ 1099], 50.00th=[ 1167], 60.00th=[ 1603], 00:21:43.152 | 70.00th=[ 2072], 80.00th=[ 2601], 90.00th=[ 3104], 95.00th=[ 3473], 00:21:43.152 | 99.00th=[ 3809], 99.50th=[ 3809], 99.90th=[ 3842], 99.95th=[ 3842], 00:21:43.152 | 99.99th=[ 3842] 00:21:43.152 bw ( KiB/s): min=16384, max=161792, per=1.64%, avg=75050.73, stdev=48189.12, samples=15 00:21:43.152 iops : min= 16, max= 158, avg=73.13, stdev=46.97, samples=15 00:21:43.152 lat (msec) : 50=0.15%, 100=0.74%, 250=1.18%, 500=1.47%, 750=2.65% 00:21:43.152 lat (msec) : 1000=23.45%, 2000=39.09%, >=2000=31.27% 00:21:43.152 cpu : usr=0.07%, sys=1.61%, ctx=1978, majf=0, minf=32769 00:21:43.152 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.7%, >=64=90.7% 00:21:43.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.152 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:21:43.152 issued rwts: total=678,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.152 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.152 job5: (groupid=0, jobs=1): err= 0: pid=2983754: Mon Jul 15 10:29:18 2024 00:21:43.152 read: IOPS=251, BW=252MiB/s (264MB/s)(2522MiB/10011msec) 00:21:43.152 slat (usec): min=23, max=2130.2k, avg=3960.87, stdev=52434.71 00:21:43.152 clat (msec): min=9, max=2599, avg=400.59, stdev=506.02 00:21:43.153 lat (msec): min=10, max=2600, avg=404.55, stdev=509.69 00:21:43.153 clat percentiles (msec): 00:21:43.153 | 1.00th=[ 29], 5.00th=[ 171], 10.00th=[ 211], 20.00th=[ 213], 00:21:43.153 | 30.00th=[ 215], 40.00th=[ 218], 50.00th=[ 241], 60.00th=[ 313], 00:21:43.153 | 70.00th=[ 317], 80.00th=[ 351], 90.00th=[ 542], 95.00th=[ 2400], 00:21:43.153 | 99.00th=[ 2500], 99.50th=[ 2500], 99.90th=[ 2567], 99.95th=[ 2567], 00:21:43.153 | 99.99th=[ 2601] 00:21:43.153 bw ( KiB/s): min=202752, max=612352, per=9.50%, avg=435463.60, stdev=137707.63, samples=10 00:21:43.153 iops : min= 198, max= 598, avg=425.20, stdev=134.52, samples=10 00:21:43.153 lat (msec) : 10=0.04%, 20=0.52%, 50=1.35%, 100=1.23%, 250=47.94% 00:21:43.153 lat (msec) : 500=36.16%, 750=6.74%, 1000=0.67%, >=2000=5.35% 00:21:43.153 cpu : usr=0.09%, sys=2.24%, ctx=2526, majf=0, minf=32769 00:21:43.153 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:21:43.153 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.153 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.1% 00:21:43.153 issued rwts: total=2522,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.153 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.153 00:21:43.153 Run status group 0 (all jobs): 00:21:43.153 READ: bw=4477MiB/s (4694MB/s), 2593KiB/s-252MiB/s (2655kB/s-264MB/s), io=46.0GiB (49.3GB), run=10010-10511msec 00:21:43.153 00:21:43.153 Disk stats (read/write): 00:21:43.153 nvme0n1: ios=32190/0, merge=0/0, ticks=5154759/0, in_queue=5154759, util=97.77% 00:21:43.153 nvme1n1: ios=54675/0, merge=0/0, ticks=6303658/0, in_queue=6303658, util=98.41% 00:21:43.153 nvme2n1: ios=76036/0, merge=0/0, ticks=5424013/0, in_queue=5424013, util=98.59% 00:21:43.153 nvme3n1: ios=35283/0, merge=0/0, ticks=5363185/0, in_queue=5363185, util=98.76% 00:21:43.153 nvme4n1: ios=74132/0, merge=0/0, ticks=5233841/0, in_queue=5233841, util=98.72% 00:21:43.153 nvme5n1: ios=100543/0, merge=0/0, ticks=7138206/0, in_queue=7138206, util=98.97% 00:21:43.153 10:29:19 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@38 -- # sync 00:21:43.153 10:29:19 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # seq 0 5 00:21:43.153 10:29:19 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:21:43.153 10:29:19 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode0 00:21:43.414 NQN:nqn.2016-06.io.spdk:cnode0 disconnected 1 controller(s) 00:21:43.414 10:29:20 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000000 00:21:43.414 10:29:20 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:21:43.414 10:29:20 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:43.414 10:29:20 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000000 00:21:43.414 10:29:20 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:43.414 10:29:20 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000000 00:21:43.414 10:29:20 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:21:43.414 10:29:20 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:43.414 10:29:20 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.414 10:29:20 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:43.414 10:29:20 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.414 10:29:20 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:21:43.414 10:29:20 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:44.800 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:44.800 10:29:21 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000001 00:21:44.800 10:29:21 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:21:44.800 10:29:21 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:44.800 10:29:21 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000001 00:21:44.800 10:29:21 nvmf_rdma.nvmf_srq_overwhelm -- 
common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:44.800 10:29:21 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000001 00:21:44.800 10:29:21 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:21:44.800 10:29:21 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:44.800 10:29:21 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.800 10:29:21 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:44.800 10:29:21 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.800 10:29:21 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:21:44.800 10:29:21 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:21:46.207 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:21:46.207 10:29:22 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000002 00:21:46.207 10:29:22 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:21:46.207 10:29:22 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:46.207 10:29:22 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000002 00:21:46.207 10:29:23 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:46.207 10:29:23 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000002 00:21:46.207 10:29:23 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:21:46.207 10:29:23 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:46.207 10:29:23 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.207 10:29:23 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:46.207 10:29:23 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.207 10:29:23 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:21:46.207 10:29:23 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:21:47.149 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:21:47.149 10:29:24 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000003 00:21:47.149 10:29:24 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:21:47.149 10:29:24 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:47.149 10:29:24 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000003 00:21:47.410 10:29:24 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:47.410 10:29:24 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000003 00:21:47.410 10:29:24 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:21:47.410 10:29:24 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:21:47.410 10:29:24 
nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.410 10:29:24 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:47.410 10:29:24 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.410 10:29:24 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:21:47.410 10:29:24 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:21:48.794 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:21:48.794 10:29:25 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000004 00:21:48.794 10:29:25 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:21:48.794 10:29:25 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:48.794 10:29:25 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000004 00:21:48.794 10:29:25 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:48.794 10:29:25 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000004 00:21:48.794 10:29:25 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:21:48.794 10:29:25 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:21:48.794 10:29:25 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.794 10:29:25 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:48.794 10:29:25 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.794 10:29:25 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:21:48.794 10:29:25 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:21:50.179 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:21:50.179 10:29:27 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000005 00:21:50.179 10:29:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:21:50.179 10:29:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:21:50.179 10:29:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000005 00:21:50.179 10:29:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:21:50.179 10:29:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000005 00:21:50.179 10:29:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:21:50.179 10:29:27 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:21:50.179 10:29:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.179 10:29:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:50.179 10:29:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.179 10:29:27 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:21:50.179 10:29:27 nvmf_rdma.nvmf_srq_overwhelm -- 
target/srq_overwhelm.sh@48 -- # nvmftestfini 00:21:50.179 10:29:27 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:50.179 10:29:27 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # sync 00:21:50.179 10:29:27 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:21:50.179 10:29:27 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:21:50.179 10:29:27 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@120 -- # set +e 00:21:50.179 10:29:27 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:50.179 10:29:27 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:21:50.179 rmmod nvme_rdma 00:21:50.179 rmmod nvme_fabrics 00:21:50.179 10:29:27 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:50.179 10:29:27 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@124 -- # set -e 00:21:50.179 10:29:27 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@125 -- # return 0 00:21:50.179 10:29:27 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@489 -- # '[' -n 2981417 ']' 00:21:50.179 10:29:27 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@490 -- # killprocess 2981417 00:21:50.179 10:29:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@948 -- # '[' -z 2981417 ']' 00:21:50.179 10:29:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@952 -- # kill -0 2981417 00:21:50.179 10:29:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@953 -- # uname 00:21:50.179 10:29:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:50.179 10:29:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2981417 00:21:50.179 10:29:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:50.179 10:29:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:50.179 10:29:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2981417' 00:21:50.179 killing process with pid 2981417 00:21:50.179 10:29:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@967 -- # kill 2981417 00:21:50.179 10:29:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@972 -- # wait 2981417 00:21:50.440 10:29:27 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:50.440 10:29:27 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:21:50.440 00:21:50.440 real 0m37.890s 00:21:50.440 user 2m18.476s 00:21:50.440 sys 0m17.227s 00:21:50.440 10:29:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:50.440 10:29:27 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:21:50.440 ************************************ 00:21:50.440 END TEST nvmf_srq_overwhelm 00:21:50.440 ************************************ 00:21:50.440 10:29:27 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:21:50.440 10:29:27 nvmf_rdma -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:21:50.440 10:29:27 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:50.440 10:29:27 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:50.440 10:29:27 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:21:50.440 
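(Reference note, not part of the captured output.) The trace above tears down the six srq_overwhelm subsystems one at a time: disconnect the initiator, wait for the block device with the matching serial to disappear, then delete the subsystem over RPC. Written out as a standalone loop it looks roughly like the sketch below; this assumes SPDK's scripts/rpc.py is invocable as rpc.py against the still-running target, and that the serials follow the SPDK0000000000000N pattern visible in the trace.

for i in $(seq 0 5); do
    nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
    # wait until no block device reports the matching serial (what waitforserial_disconnect greps for)
    while lsblk -l -o NAME,SERIAL | grep -q -w "SPDK0000000000000${i}"; do
        sleep 1
    done
    rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
done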
************************************ 00:21:50.440 START TEST nvmf_shutdown 00:21:50.440 ************************************ 00:21:50.440 10:29:27 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:21:50.701 * Looking for test storage... 00:21:50.701 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:21:50.701 10:29:27 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:50.701 10:29:27 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:21:50.701 10:29:27 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:50.701 10:29:27 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:50.701 10:29:27 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:50.701 10:29:27 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:50.701 10:29:27 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:50.701 10:29:27 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:50.701 10:29:27 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:50.701 10:29:27 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:50.702 10:29:27 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:50.702 10:29:27 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:50.702 10:29:27 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:50.702 10:29:27 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:50.702 10:29:27 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:50.702 10:29:27 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:50.702 10:29:27 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:50.702 10:29:27 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:50.702 10:29:27 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:50.702 10:29:27 nvmf_rdma.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:50.702 10:29:27 nvmf_rdma.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:50.702 10:29:27 nvmf_rdma.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:50.702 10:29:27 nvmf_rdma.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.702 10:29:27 nvmf_rdma.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.702 10:29:27 nvmf_rdma.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.702 10:29:27 nvmf_rdma.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:21:50.702 10:29:27 nvmf_rdma.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.702 10:29:27 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:21:50.702 10:29:27 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:50.702 10:29:27 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:50.702 10:29:27 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:50.702 10:29:27 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:50.702 10:29:27 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:50.702 10:29:27 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:50.702 10:29:27 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:50.702 10:29:27 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:50.702 10:29:27 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:50.702 10:29:27 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:50.702 10:29:27 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:21:50.702 10:29:27 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:50.702 10:29:27 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:50.702 10:29:27 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:50.702 ************************************ 00:21:50.702 START TEST nvmf_shutdown_tc1 00:21:50.702 ************************************ 00:21:50.702 10:29:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:21:50.702 10:29:27 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:21:50.702 10:29:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:50.702 10:29:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:21:50.702 10:29:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:50.702 10:29:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:50.702 10:29:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:50.702 10:29:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:50.702 10:29:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:50.702 10:29:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:50.702 10:29:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:50.702 10:29:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:50.702 10:29:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:50.702 10:29:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:50.702 10:29:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:58.867 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:58.867 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:58.867 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:58.867 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:58.867 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:58.867 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:58.867 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:58.867 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:21:58.867 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:58.867 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:21:58.867 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:21:58.867 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:21:58.867 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:21:58.867 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:21:58.867 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:58.867 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:58.867 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:58.867 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:58.867 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:58.867 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:58.867 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:58.867 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:58.867 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:58.867 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:58.867 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:58.867 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:58.867 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:58.867 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:21:58.867 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:21:58.867 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:21:58.867 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:21:58.867 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:21:58.867 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:58.867 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:58.867 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:21:58.867 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:21:58.867 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:58.867 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:58.867 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:58.867 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:58.867 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:58.867 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:58.867 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:58.867 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:21:58.867 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:21:58.867 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:58.867 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:58.867 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # 
[[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:58.867 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:58.867 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:58.867 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:58.867 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:58.867 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:21:58.867 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:58.867 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:58.867 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:58.867 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:58.867 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:58.867 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:21:58.867 Found net devices under 0000:98:00.0: mlx_0_0 00:21:58.867 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:58.867 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:58.867 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:58.867 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:58.867 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:58.867 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:58.867 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:21:58.867 Found net devices under 0000:98:00.1: mlx_0_1 00:21:58.867 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:58.867 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:58.867 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:58.867 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # rdma_device_init 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # uname 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:21:58.868 10:29:35 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@63 -- # modprobe ib_core 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # allocate_nic_ips 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- 
# awk '{print $4}' 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:21:58.868 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:58.868 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:21:58.868 altname enp152s0f0np0 00:21:58.868 altname ens817f0np0 00:21:58.868 inet 192.168.100.8/24 scope global mlx_0_0 00:21:58.868 valid_lft forever preferred_lft forever 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:21:58.868 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:58.868 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:21:58.868 altname enp152s0f1np1 00:21:58.868 altname ens817f1np1 00:21:58.868 inet 192.168.100.9/24 scope global mlx_0_1 00:21:58.868 valid_lft forever preferred_lft forever 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:58.868 10:29:35 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:58.868 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:58.869 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:58.869 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:58.869 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:21:58.869 192.168.100.9' 00:21:58.869 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:21:58.869 192.168.100.9' 00:21:58.869 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@457 -- # head -n 1 00:21:58.869 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:58.869 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:21:58.869 192.168.100.9' 00:21:58.869 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # head -n 1 00:21:58.869 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # tail -n +2 00:21:58.869 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:58.869 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:21:58.869 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:58.869 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:21:58.869 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:21:58.869 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:21:58.869 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:58.869 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:58.869 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:58.869 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:58.869 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=2991622 00:21:58.869 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 2991622 00:21:58.869 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:58.869 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 2991622 ']' 00:21:58.869 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:58.869 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:58.869 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:58.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:58.869 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:58.869 10:29:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:58.869 [2024-07-15 10:29:35.690648] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:21:58.869 [2024-07-15 10:29:35.690698] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:58.869 EAL: No free 2048 kB hugepages reported on node 1 00:21:58.869 [2024-07-15 10:29:35.774115] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:58.869 [2024-07-15 10:29:35.847524] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:58.869 [2024-07-15 10:29:35.847571] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:58.869 [2024-07-15 10:29:35.847579] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:58.869 [2024-07-15 10:29:35.847586] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:21:58.869 [2024-07-15 10:29:35.847592] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:58.869 [2024-07-15 10:29:35.847715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:58.869 [2024-07-15 10:29:35.847873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:58.869 [2024-07-15 10:29:35.848035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:58.869 [2024-07-15 10:29:35.848036] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:59.439 10:29:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:59.439 10:29:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:21:59.439 10:29:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:59.439 10:29:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:59.439 10:29:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:59.439 10:29:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:59.439 10:29:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:21:59.439 10:29:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.439 10:29:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:59.439 [2024-07-15 10:29:36.546487] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x11e26b0/0x11e6ba0) succeed. 00:21:59.439 [2024-07-15 10:29:36.561083] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x11e3cf0/0x1228230) succeed. 
00:21:59.700 10:29:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.700 10:29:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:59.700 10:29:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:59.700 10:29:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:59.700 10:29:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:59.700 10:29:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:59.700 10:29:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:59.700 10:29:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:59.700 10:29:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:59.700 10:29:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:59.700 10:29:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:59.700 10:29:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:59.700 10:29:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:59.700 10:29:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:59.700 10:29:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:59.700 10:29:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:59.700 10:29:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:59.700 10:29:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:59.700 10:29:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:59.700 10:29:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:59.700 10:29:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:59.700 10:29:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:59.700 10:29:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:59.700 10:29:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:59.700 10:29:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:59.700 10:29:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:59.700 10:29:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:59.700 10:29:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.700 10:29:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:59.700 Malloc1 00:21:59.700 [2024-07-15 10:29:36.782104] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:59.700 Malloc2 00:21:59.700 Malloc3 00:21:59.700 Malloc4 
00:21:59.961 Malloc5 00:21:59.961 Malloc6 00:21:59.961 Malloc7 00:21:59.961 Malloc8 00:21:59.961 Malloc9 00:21:59.961 Malloc10 00:22:00.222 10:29:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.222 10:29:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:00.222 10:29:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:00.222 10:29:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:00.222 10:29:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=2992005 00:22:00.222 10:29:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 2992005 /var/tmp/bdevperf.sock 00:22:00.222 10:29:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 2992005 ']' 00:22:00.222 10:29:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:00.222 10:29:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:00.222 10:29:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:00.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:00.222 10:29:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:00.222 10:29:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:00.222 10:29:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:00.222 10:29:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:00.222 10:29:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:22:00.222 10:29:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:22:00.222 10:29:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:00.222 10:29:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:00.222 { 00:22:00.222 "params": { 00:22:00.222 "name": "Nvme$subsystem", 00:22:00.222 "trtype": "$TEST_TRANSPORT", 00:22:00.222 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:00.222 "adrfam": "ipv4", 00:22:00.222 "trsvcid": "$NVMF_PORT", 00:22:00.222 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:00.222 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:00.222 "hdgst": ${hdgst:-false}, 00:22:00.222 "ddgst": ${ddgst:-false} 00:22:00.222 }, 00:22:00.222 "method": "bdev_nvme_attach_controller" 00:22:00.222 } 00:22:00.222 EOF 00:22:00.222 )") 00:22:00.222 10:29:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:00.222 10:29:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:00.222 10:29:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:00.222 { 00:22:00.222 "params": { 00:22:00.222 "name": "Nvme$subsystem", 00:22:00.222 "trtype": 
"$TEST_TRANSPORT", 00:22:00.222 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:00.222 "adrfam": "ipv4", 00:22:00.222 "trsvcid": "$NVMF_PORT", 00:22:00.222 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:00.223 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:00.223 "hdgst": ${hdgst:-false}, 00:22:00.223 "ddgst": ${ddgst:-false} 00:22:00.223 }, 00:22:00.223 "method": "bdev_nvme_attach_controller" 00:22:00.223 } 00:22:00.223 EOF 00:22:00.223 )") 00:22:00.223 10:29:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:00.223 10:29:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:00.223 10:29:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:00.223 { 00:22:00.223 "params": { 00:22:00.223 "name": "Nvme$subsystem", 00:22:00.223 "trtype": "$TEST_TRANSPORT", 00:22:00.223 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:00.223 "adrfam": "ipv4", 00:22:00.223 "trsvcid": "$NVMF_PORT", 00:22:00.223 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:00.223 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:00.223 "hdgst": ${hdgst:-false}, 00:22:00.223 "ddgst": ${ddgst:-false} 00:22:00.223 }, 00:22:00.223 "method": "bdev_nvme_attach_controller" 00:22:00.223 } 00:22:00.223 EOF 00:22:00.223 )") 00:22:00.223 10:29:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:00.223 10:29:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:00.223 10:29:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:00.223 { 00:22:00.223 "params": { 00:22:00.223 "name": "Nvme$subsystem", 00:22:00.223 "trtype": "$TEST_TRANSPORT", 00:22:00.223 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:00.223 "adrfam": "ipv4", 00:22:00.223 "trsvcid": "$NVMF_PORT", 00:22:00.223 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:00.223 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:00.223 "hdgst": ${hdgst:-false}, 00:22:00.223 "ddgst": ${ddgst:-false} 00:22:00.223 }, 00:22:00.223 "method": "bdev_nvme_attach_controller" 00:22:00.223 } 00:22:00.223 EOF 00:22:00.223 )") 00:22:00.223 10:29:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:00.223 10:29:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:00.223 10:29:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:00.223 { 00:22:00.223 "params": { 00:22:00.223 "name": "Nvme$subsystem", 00:22:00.223 "trtype": "$TEST_TRANSPORT", 00:22:00.223 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:00.223 "adrfam": "ipv4", 00:22:00.223 "trsvcid": "$NVMF_PORT", 00:22:00.223 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:00.223 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:00.223 "hdgst": ${hdgst:-false}, 00:22:00.223 "ddgst": ${ddgst:-false} 00:22:00.223 }, 00:22:00.223 "method": "bdev_nvme_attach_controller" 00:22:00.223 } 00:22:00.223 EOF 00:22:00.223 )") 00:22:00.223 10:29:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:00.223 10:29:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:00.223 10:29:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:00.223 { 00:22:00.223 "params": { 00:22:00.223 "name": "Nvme$subsystem", 00:22:00.223 "trtype": "$TEST_TRANSPORT", 
00:22:00.223 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:00.223 "adrfam": "ipv4", 00:22:00.223 "trsvcid": "$NVMF_PORT", 00:22:00.223 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:00.223 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:00.223 "hdgst": ${hdgst:-false}, 00:22:00.223 "ddgst": ${ddgst:-false} 00:22:00.223 }, 00:22:00.223 "method": "bdev_nvme_attach_controller" 00:22:00.223 } 00:22:00.223 EOF 00:22:00.223 )") 00:22:00.223 10:29:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:00.223 [2024-07-15 10:29:37.247600] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:22:00.223 [2024-07-15 10:29:37.247653] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:00.223 10:29:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:00.223 10:29:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:00.223 { 00:22:00.223 "params": { 00:22:00.223 "name": "Nvme$subsystem", 00:22:00.223 "trtype": "$TEST_TRANSPORT", 00:22:00.223 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:00.223 "adrfam": "ipv4", 00:22:00.223 "trsvcid": "$NVMF_PORT", 00:22:00.223 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:00.223 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:00.223 "hdgst": ${hdgst:-false}, 00:22:00.223 "ddgst": ${ddgst:-false} 00:22:00.223 }, 00:22:00.223 "method": "bdev_nvme_attach_controller" 00:22:00.223 } 00:22:00.223 EOF 00:22:00.223 )") 00:22:00.223 10:29:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:00.223 10:29:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:00.223 10:29:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:00.223 { 00:22:00.223 "params": { 00:22:00.223 "name": "Nvme$subsystem", 00:22:00.223 "trtype": "$TEST_TRANSPORT", 00:22:00.223 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:00.223 "adrfam": "ipv4", 00:22:00.223 "trsvcid": "$NVMF_PORT", 00:22:00.223 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:00.223 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:00.223 "hdgst": ${hdgst:-false}, 00:22:00.223 "ddgst": ${ddgst:-false} 00:22:00.223 }, 00:22:00.223 "method": "bdev_nvme_attach_controller" 00:22:00.223 } 00:22:00.223 EOF 00:22:00.223 )") 00:22:00.223 10:29:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:00.223 10:29:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:00.223 10:29:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:00.223 { 00:22:00.223 "params": { 00:22:00.223 "name": "Nvme$subsystem", 00:22:00.223 "trtype": "$TEST_TRANSPORT", 00:22:00.223 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:00.223 "adrfam": "ipv4", 00:22:00.223 "trsvcid": "$NVMF_PORT", 00:22:00.223 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:00.223 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:00.223 "hdgst": ${hdgst:-false}, 00:22:00.223 "ddgst": ${ddgst:-false} 00:22:00.223 }, 00:22:00.223 "method": "bdev_nvme_attach_controller" 00:22:00.223 } 00:22:00.223 EOF 00:22:00.223 )") 00:22:00.224 10:29:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # cat 00:22:00.224 10:29:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:00.224 10:29:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:00.224 { 00:22:00.224 "params": { 00:22:00.224 "name": "Nvme$subsystem", 00:22:00.224 "trtype": "$TEST_TRANSPORT", 00:22:00.224 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:00.224 "adrfam": "ipv4", 00:22:00.224 "trsvcid": "$NVMF_PORT", 00:22:00.224 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:00.224 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:00.224 "hdgst": ${hdgst:-false}, 00:22:00.224 "ddgst": ${ddgst:-false} 00:22:00.224 }, 00:22:00.224 "method": "bdev_nvme_attach_controller" 00:22:00.224 } 00:22:00.224 EOF 00:22:00.224 )") 00:22:00.224 10:29:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:00.224 EAL: No free 2048 kB hugepages reported on node 1 00:22:00.224 10:29:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:22:00.224 10:29:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:22:00.224 10:29:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:00.224 "params": { 00:22:00.224 "name": "Nvme1", 00:22:00.224 "trtype": "rdma", 00:22:00.224 "traddr": "192.168.100.8", 00:22:00.224 "adrfam": "ipv4", 00:22:00.224 "trsvcid": "4420", 00:22:00.224 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:00.224 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:00.224 "hdgst": false, 00:22:00.224 "ddgst": false 00:22:00.224 }, 00:22:00.224 "method": "bdev_nvme_attach_controller" 00:22:00.224 },{ 00:22:00.224 "params": { 00:22:00.224 "name": "Nvme2", 00:22:00.224 "trtype": "rdma", 00:22:00.224 "traddr": "192.168.100.8", 00:22:00.224 "adrfam": "ipv4", 00:22:00.224 "trsvcid": "4420", 00:22:00.224 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:00.224 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:00.224 "hdgst": false, 00:22:00.224 "ddgst": false 00:22:00.224 }, 00:22:00.224 "method": "bdev_nvme_attach_controller" 00:22:00.224 },{ 00:22:00.224 "params": { 00:22:00.224 "name": "Nvme3", 00:22:00.224 "trtype": "rdma", 00:22:00.224 "traddr": "192.168.100.8", 00:22:00.224 "adrfam": "ipv4", 00:22:00.224 "trsvcid": "4420", 00:22:00.224 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:00.224 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:00.224 "hdgst": false, 00:22:00.224 "ddgst": false 00:22:00.224 }, 00:22:00.224 "method": "bdev_nvme_attach_controller" 00:22:00.224 },{ 00:22:00.224 "params": { 00:22:00.224 "name": "Nvme4", 00:22:00.224 "trtype": "rdma", 00:22:00.224 "traddr": "192.168.100.8", 00:22:00.224 "adrfam": "ipv4", 00:22:00.224 "trsvcid": "4420", 00:22:00.224 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:00.224 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:00.224 "hdgst": false, 00:22:00.224 "ddgst": false 00:22:00.224 }, 00:22:00.224 "method": "bdev_nvme_attach_controller" 00:22:00.224 },{ 00:22:00.224 "params": { 00:22:00.224 "name": "Nvme5", 00:22:00.224 "trtype": "rdma", 00:22:00.224 "traddr": "192.168.100.8", 00:22:00.224 "adrfam": "ipv4", 00:22:00.224 "trsvcid": "4420", 00:22:00.224 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:00.224 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:00.224 "hdgst": false, 00:22:00.224 "ddgst": false 00:22:00.224 }, 00:22:00.224 "method": "bdev_nvme_attach_controller" 00:22:00.224 },{ 00:22:00.224 "params": { 00:22:00.224 "name": "Nvme6", 00:22:00.224 "trtype": "rdma", 
00:22:00.224 "traddr": "192.168.100.8", 00:22:00.224 "adrfam": "ipv4", 00:22:00.224 "trsvcid": "4420", 00:22:00.224 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:00.224 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:00.224 "hdgst": false, 00:22:00.224 "ddgst": false 00:22:00.224 }, 00:22:00.224 "method": "bdev_nvme_attach_controller" 00:22:00.224 },{ 00:22:00.224 "params": { 00:22:00.224 "name": "Nvme7", 00:22:00.224 "trtype": "rdma", 00:22:00.224 "traddr": "192.168.100.8", 00:22:00.224 "adrfam": "ipv4", 00:22:00.224 "trsvcid": "4420", 00:22:00.224 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:00.224 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:00.224 "hdgst": false, 00:22:00.224 "ddgst": false 00:22:00.224 }, 00:22:00.224 "method": "bdev_nvme_attach_controller" 00:22:00.224 },{ 00:22:00.224 "params": { 00:22:00.224 "name": "Nvme8", 00:22:00.224 "trtype": "rdma", 00:22:00.224 "traddr": "192.168.100.8", 00:22:00.224 "adrfam": "ipv4", 00:22:00.224 "trsvcid": "4420", 00:22:00.224 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:00.224 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:00.224 "hdgst": false, 00:22:00.224 "ddgst": false 00:22:00.224 }, 00:22:00.224 "method": "bdev_nvme_attach_controller" 00:22:00.224 },{ 00:22:00.224 "params": { 00:22:00.224 "name": "Nvme9", 00:22:00.224 "trtype": "rdma", 00:22:00.224 "traddr": "192.168.100.8", 00:22:00.224 "adrfam": "ipv4", 00:22:00.224 "trsvcid": "4420", 00:22:00.224 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:00.224 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:00.224 "hdgst": false, 00:22:00.224 "ddgst": false 00:22:00.224 }, 00:22:00.224 "method": "bdev_nvme_attach_controller" 00:22:00.224 },{ 00:22:00.224 "params": { 00:22:00.224 "name": "Nvme10", 00:22:00.224 "trtype": "rdma", 00:22:00.224 "traddr": "192.168.100.8", 00:22:00.224 "adrfam": "ipv4", 00:22:00.224 "trsvcid": "4420", 00:22:00.224 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:00.224 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:00.224 "hdgst": false, 00:22:00.224 "ddgst": false 00:22:00.224 }, 00:22:00.224 "method": "bdev_nvme_attach_controller" 00:22:00.224 }' 00:22:00.224 [2024-07-15 10:29:37.315043] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.224 [2024-07-15 10:29:37.380012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:01.169 10:29:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:01.169 10:29:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:22:01.169 10:29:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:01.169 10:29:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.169 10:29:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:01.169 10:29:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.169 10:29:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 2992005 00:22:01.169 10:29:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:22:01.169 10:29:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:22:02.112 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 2992005 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock 
--json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:22:02.112 10:29:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 2991622 00:22:02.112 10:29:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:02.112 10:29:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:02.112 10:29:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:22:02.112 10:29:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:22:02.112 10:29:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:02.112 10:29:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:02.112 { 00:22:02.112 "params": { 00:22:02.112 "name": "Nvme$subsystem", 00:22:02.112 "trtype": "$TEST_TRANSPORT", 00:22:02.112 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:02.112 "adrfam": "ipv4", 00:22:02.112 "trsvcid": "$NVMF_PORT", 00:22:02.112 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:02.112 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:02.112 "hdgst": ${hdgst:-false}, 00:22:02.112 "ddgst": ${ddgst:-false} 00:22:02.112 }, 00:22:02.112 "method": "bdev_nvme_attach_controller" 00:22:02.112 } 00:22:02.112 EOF 00:22:02.112 )") 00:22:02.112 10:29:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:02.112 10:29:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:02.112 10:29:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:02.112 { 00:22:02.112 "params": { 00:22:02.112 "name": "Nvme$subsystem", 00:22:02.112 "trtype": "$TEST_TRANSPORT", 00:22:02.112 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:02.112 "adrfam": "ipv4", 00:22:02.112 "trsvcid": "$NVMF_PORT", 00:22:02.112 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:02.112 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:02.112 "hdgst": ${hdgst:-false}, 00:22:02.112 "ddgst": ${ddgst:-false} 00:22:02.112 }, 00:22:02.112 "method": "bdev_nvme_attach_controller" 00:22:02.112 } 00:22:02.112 EOF 00:22:02.112 )") 00:22:02.112 10:29:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:02.112 10:29:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:02.112 10:29:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:02.112 { 00:22:02.112 "params": { 00:22:02.112 "name": "Nvme$subsystem", 00:22:02.112 "trtype": "$TEST_TRANSPORT", 00:22:02.112 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:02.112 "adrfam": "ipv4", 00:22:02.112 "trsvcid": "$NVMF_PORT", 00:22:02.112 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:02.112 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:02.112 "hdgst": ${hdgst:-false}, 00:22:02.112 "ddgst": ${ddgst:-false} 00:22:02.112 }, 00:22:02.112 "method": "bdev_nvme_attach_controller" 00:22:02.112 } 00:22:02.112 EOF 00:22:02.112 )") 00:22:02.112 10:29:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:02.112 10:29:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:02.112 10:29:39 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:02.112 { 00:22:02.112 "params": { 00:22:02.112 "name": "Nvme$subsystem", 00:22:02.112 "trtype": "$TEST_TRANSPORT", 00:22:02.112 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:02.112 "adrfam": "ipv4", 00:22:02.112 "trsvcid": "$NVMF_PORT", 00:22:02.112 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:02.112 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:02.112 "hdgst": ${hdgst:-false}, 00:22:02.112 "ddgst": ${ddgst:-false} 00:22:02.112 }, 00:22:02.112 "method": "bdev_nvme_attach_controller" 00:22:02.112 } 00:22:02.112 EOF 00:22:02.112 )") 00:22:02.112 10:29:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:02.112 10:29:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:02.112 10:29:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:02.112 { 00:22:02.112 "params": { 00:22:02.112 "name": "Nvme$subsystem", 00:22:02.112 "trtype": "$TEST_TRANSPORT", 00:22:02.112 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:02.112 "adrfam": "ipv4", 00:22:02.112 "trsvcid": "$NVMF_PORT", 00:22:02.112 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:02.112 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:02.112 "hdgst": ${hdgst:-false}, 00:22:02.112 "ddgst": ${ddgst:-false} 00:22:02.112 }, 00:22:02.112 "method": "bdev_nvme_attach_controller" 00:22:02.112 } 00:22:02.112 EOF 00:22:02.112 )") 00:22:02.112 10:29:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:02.112 10:29:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:02.112 10:29:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:02.112 { 00:22:02.112 "params": { 00:22:02.112 "name": "Nvme$subsystem", 00:22:02.112 "trtype": "$TEST_TRANSPORT", 00:22:02.112 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:02.112 "adrfam": "ipv4", 00:22:02.112 "trsvcid": "$NVMF_PORT", 00:22:02.112 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:02.112 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:02.112 "hdgst": ${hdgst:-false}, 00:22:02.112 "ddgst": ${ddgst:-false} 00:22:02.112 }, 00:22:02.112 "method": "bdev_nvme_attach_controller" 00:22:02.112 } 00:22:02.112 EOF 00:22:02.112 )") 00:22:02.373 10:29:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:02.373 [2024-07-15 10:29:39.312506] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:22:02.373 [2024-07-15 10:29:39.312559] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2992377 ] 00:22:02.373 10:29:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:02.373 10:29:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:02.373 { 00:22:02.373 "params": { 00:22:02.373 "name": "Nvme$subsystem", 00:22:02.373 "trtype": "$TEST_TRANSPORT", 00:22:02.373 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:02.373 "adrfam": "ipv4", 00:22:02.373 "trsvcid": "$NVMF_PORT", 00:22:02.373 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:02.373 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:02.373 "hdgst": ${hdgst:-false}, 00:22:02.373 "ddgst": ${ddgst:-false} 00:22:02.373 }, 00:22:02.373 "method": "bdev_nvme_attach_controller" 00:22:02.373 } 00:22:02.373 EOF 00:22:02.373 )") 00:22:02.373 10:29:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:02.373 10:29:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:02.373 10:29:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:02.373 { 00:22:02.373 "params": { 00:22:02.373 "name": "Nvme$subsystem", 00:22:02.373 "trtype": "$TEST_TRANSPORT", 00:22:02.373 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:02.373 "adrfam": "ipv4", 00:22:02.373 "trsvcid": "$NVMF_PORT", 00:22:02.373 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:02.373 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:02.373 "hdgst": ${hdgst:-false}, 00:22:02.373 "ddgst": ${ddgst:-false} 00:22:02.373 }, 00:22:02.373 "method": "bdev_nvme_attach_controller" 00:22:02.373 } 00:22:02.373 EOF 00:22:02.373 )") 00:22:02.373 10:29:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:02.373 10:29:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:02.373 10:29:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:02.373 { 00:22:02.373 "params": { 00:22:02.373 "name": "Nvme$subsystem", 00:22:02.373 "trtype": "$TEST_TRANSPORT", 00:22:02.373 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:02.373 "adrfam": "ipv4", 00:22:02.373 "trsvcid": "$NVMF_PORT", 00:22:02.373 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:02.373 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:02.373 "hdgst": ${hdgst:-false}, 00:22:02.373 "ddgst": ${ddgst:-false} 00:22:02.373 }, 00:22:02.373 "method": "bdev_nvme_attach_controller" 00:22:02.373 } 00:22:02.373 EOF 00:22:02.373 )") 00:22:02.373 10:29:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:02.373 10:29:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:02.373 10:29:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:02.373 { 00:22:02.373 "params": { 00:22:02.373 "name": "Nvme$subsystem", 00:22:02.373 "trtype": "$TEST_TRANSPORT", 00:22:02.373 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:02.373 "adrfam": "ipv4", 00:22:02.373 "trsvcid": "$NVMF_PORT", 00:22:02.373 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:02.373 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:22:02.373 "hdgst": ${hdgst:-false}, 00:22:02.373 "ddgst": ${ddgst:-false} 00:22:02.373 }, 00:22:02.373 "method": "bdev_nvme_attach_controller" 00:22:02.373 } 00:22:02.373 EOF 00:22:02.373 )") 00:22:02.373 10:29:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:02.373 EAL: No free 2048 kB hugepages reported on node 1 00:22:02.373 10:29:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:22:02.373 10:29:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:22:02.373 10:29:39 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:02.373 "params": { 00:22:02.373 "name": "Nvme1", 00:22:02.373 "trtype": "rdma", 00:22:02.373 "traddr": "192.168.100.8", 00:22:02.373 "adrfam": "ipv4", 00:22:02.373 "trsvcid": "4420", 00:22:02.373 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:02.373 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:02.373 "hdgst": false, 00:22:02.373 "ddgst": false 00:22:02.373 }, 00:22:02.373 "method": "bdev_nvme_attach_controller" 00:22:02.373 },{ 00:22:02.373 "params": { 00:22:02.373 "name": "Nvme2", 00:22:02.373 "trtype": "rdma", 00:22:02.373 "traddr": "192.168.100.8", 00:22:02.373 "adrfam": "ipv4", 00:22:02.373 "trsvcid": "4420", 00:22:02.373 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:02.373 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:02.373 "hdgst": false, 00:22:02.373 "ddgst": false 00:22:02.373 }, 00:22:02.373 "method": "bdev_nvme_attach_controller" 00:22:02.373 },{ 00:22:02.373 "params": { 00:22:02.374 "name": "Nvme3", 00:22:02.374 "trtype": "rdma", 00:22:02.374 "traddr": "192.168.100.8", 00:22:02.374 "adrfam": "ipv4", 00:22:02.374 "trsvcid": "4420", 00:22:02.374 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:02.374 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:02.374 "hdgst": false, 00:22:02.374 "ddgst": false 00:22:02.374 }, 00:22:02.374 "method": "bdev_nvme_attach_controller" 00:22:02.374 },{ 00:22:02.374 "params": { 00:22:02.374 "name": "Nvme4", 00:22:02.374 "trtype": "rdma", 00:22:02.374 "traddr": "192.168.100.8", 00:22:02.374 "adrfam": "ipv4", 00:22:02.374 "trsvcid": "4420", 00:22:02.374 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:02.374 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:02.374 "hdgst": false, 00:22:02.374 "ddgst": false 00:22:02.374 }, 00:22:02.374 "method": "bdev_nvme_attach_controller" 00:22:02.374 },{ 00:22:02.374 "params": { 00:22:02.374 "name": "Nvme5", 00:22:02.374 "trtype": "rdma", 00:22:02.374 "traddr": "192.168.100.8", 00:22:02.374 "adrfam": "ipv4", 00:22:02.374 "trsvcid": "4420", 00:22:02.374 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:02.374 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:02.374 "hdgst": false, 00:22:02.374 "ddgst": false 00:22:02.374 }, 00:22:02.374 "method": "bdev_nvme_attach_controller" 00:22:02.374 },{ 00:22:02.374 "params": { 00:22:02.374 "name": "Nvme6", 00:22:02.374 "trtype": "rdma", 00:22:02.374 "traddr": "192.168.100.8", 00:22:02.374 "adrfam": "ipv4", 00:22:02.374 "trsvcid": "4420", 00:22:02.374 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:02.374 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:02.374 "hdgst": false, 00:22:02.374 "ddgst": false 00:22:02.374 }, 00:22:02.374 "method": "bdev_nvme_attach_controller" 00:22:02.374 },{ 00:22:02.374 "params": { 00:22:02.374 "name": "Nvme7", 00:22:02.374 "trtype": "rdma", 00:22:02.374 "traddr": "192.168.100.8", 00:22:02.374 "adrfam": "ipv4", 00:22:02.374 "trsvcid": "4420", 00:22:02.374 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:02.374 "hostnqn": 
"nqn.2016-06.io.spdk:host7", 00:22:02.374 "hdgst": false, 00:22:02.374 "ddgst": false 00:22:02.374 }, 00:22:02.374 "method": "bdev_nvme_attach_controller" 00:22:02.374 },{ 00:22:02.374 "params": { 00:22:02.374 "name": "Nvme8", 00:22:02.374 "trtype": "rdma", 00:22:02.374 "traddr": "192.168.100.8", 00:22:02.374 "adrfam": "ipv4", 00:22:02.374 "trsvcid": "4420", 00:22:02.374 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:02.374 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:02.374 "hdgst": false, 00:22:02.374 "ddgst": false 00:22:02.374 }, 00:22:02.374 "method": "bdev_nvme_attach_controller" 00:22:02.374 },{ 00:22:02.374 "params": { 00:22:02.374 "name": "Nvme9", 00:22:02.374 "trtype": "rdma", 00:22:02.374 "traddr": "192.168.100.8", 00:22:02.374 "adrfam": "ipv4", 00:22:02.374 "trsvcid": "4420", 00:22:02.374 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:02.374 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:02.374 "hdgst": false, 00:22:02.374 "ddgst": false 00:22:02.374 }, 00:22:02.374 "method": "bdev_nvme_attach_controller" 00:22:02.374 },{ 00:22:02.374 "params": { 00:22:02.374 "name": "Nvme10", 00:22:02.374 "trtype": "rdma", 00:22:02.374 "traddr": "192.168.100.8", 00:22:02.374 "adrfam": "ipv4", 00:22:02.374 "trsvcid": "4420", 00:22:02.374 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:02.374 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:02.374 "hdgst": false, 00:22:02.374 "ddgst": false 00:22:02.374 }, 00:22:02.374 "method": "bdev_nvme_attach_controller" 00:22:02.374 }' 00:22:02.374 [2024-07-15 10:29:39.380029] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.374 [2024-07-15 10:29:39.443972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:03.318 Running I/O for 1 seconds... 00:22:04.705 00:22:04.705 Latency(us) 00:22:04.705 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:04.705 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:04.705 Verification LBA range: start 0x0 length 0x400 00:22:04.705 Nvme1n1 : 1.20 286.64 17.92 0.00 0.00 215373.62 21080.75 232434.35 00:22:04.705 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:04.705 Verification LBA range: start 0x0 length 0x400 00:22:04.705 Nvme2n1 : 1.20 268.98 16.81 0.00 0.00 222629.57 24357.55 215831.89 00:22:04.705 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:04.705 Verification LBA range: start 0x0 length 0x400 00:22:04.705 Nvme3n1 : 1.21 291.88 18.24 0.00 0.00 205474.60 29491.20 206219.95 00:22:04.705 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:04.705 Verification LBA range: start 0x0 length 0x400 00:22:04.705 Nvme4n1 : 1.21 278.34 17.40 0.00 0.00 209366.71 37355.52 192238.93 00:22:04.705 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:04.705 Verification LBA range: start 0x0 length 0x400 00:22:04.705 Nvme5n1 : 1.22 315.55 19.72 0.00 0.00 187658.03 7809.71 175636.48 00:22:04.705 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:04.705 Verification LBA range: start 0x0 length 0x400 00:22:04.705 Nvme6n1 : 1.22 315.16 19.70 0.00 0.00 184207.86 8355.84 165150.72 00:22:04.705 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:04.705 Verification LBA range: start 0x0 length 0x400 00:22:04.705 Nvme7n1 : 1.22 341.03 21.31 0.00 0.00 167087.43 3795.63 165150.72 00:22:04.705 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:04.705 
Verification LBA range: start 0x0 length 0x400 00:22:04.705 Nvme8n1 : 1.22 322.71 20.17 0.00 0.00 172942.41 8847.36 158160.21 00:22:04.705 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:04.705 Verification LBA range: start 0x0 length 0x400 00:22:04.705 Nvme9n1 : 1.21 316.55 19.78 0.00 0.00 174113.42 10704.21 142431.57 00:22:04.705 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:04.705 Verification LBA range: start 0x0 length 0x400 00:22:04.705 Nvme10n1 : 1.22 263.37 16.46 0.00 0.00 205279.74 11741.87 237677.23 00:22:04.705 =================================================================================================================== 00:22:04.705 Total : 3000.22 187.51 0.00 0.00 192938.56 3795.63 237677.23 00:22:04.705 10:29:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:22:04.705 10:29:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:22:04.705 10:29:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:04.705 10:29:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:04.705 10:29:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:04.705 10:29:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:04.705 10:29:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:22:04.705 10:29:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:22:04.705 10:29:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:22:04.705 10:29:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:22:04.705 10:29:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:04.705 10:29:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:22:04.705 rmmod nvme_rdma 00:22:04.705 rmmod nvme_fabrics 00:22:04.705 10:29:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:04.705 10:29:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:22:04.705 10:29:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:22:04.705 10:29:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 2991622 ']' 00:22:04.705 10:29:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 2991622 00:22:04.705 10:29:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 2991622 ']' 00:22:04.705 10:29:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 2991622 00:22:04.966 10:29:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:22:04.966 10:29:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:04.966 10:29:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2991622 00:22:04.966 10:29:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 
00:22:04.966 10:29:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:04.966 10:29:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2991622' 00:22:04.966 killing process with pid 2991622 00:22:04.966 10:29:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 2991622 00:22:04.966 10:29:41 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 2991622 00:22:05.227 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:05.227 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:22:05.227 00:22:05.227 real 0m14.536s 00:22:05.228 user 0m30.706s 00:22:05.228 sys 0m6.844s 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:05.228 ************************************ 00:22:05.228 END TEST nvmf_shutdown_tc1 00:22:05.228 ************************************ 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:05.228 ************************************ 00:22:05.228 START TEST nvmf_shutdown_tc2 00:22:05.228 ************************************ 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 
-- common/autotest_common.sh@10 -- # set +x 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:22:05.228 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:22:05.228 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:22:05.228 Found net devices under 0000:98:00.0: mlx_0_0 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:22:05.228 Found net devices under 0000:98:00.1: mlx_0_1 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # rdma_device_init 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # uname 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@63 -- # modprobe ib_core 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # allocate_nic_ips 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:05.228 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:05.491 10:29:42 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:22:05.491 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:05.491 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:22:05.491 altname enp152s0f0np0 00:22:05.491 altname ens817f0np0 00:22:05.491 inet 192.168.100.8/24 scope global mlx_0_0 00:22:05.491 valid_lft forever preferred_lft forever 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:05.491 10:29:42 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:22:05.491 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:05.491 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:22:05.491 altname enp152s0f1np1 00:22:05.491 altname ens817f1np1 00:22:05.491 inet 192.168.100.9/24 scope global mlx_0_1 00:22:05.491 valid_lft forever preferred_lft forever 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@87 -- # 
get_ip_address mlx_0_0 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:22:05.491 192.168.100.9' 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:22:05.491 192.168.100.9' 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@457 -- # head -n 1 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:22:05.491 192.168.100.9' 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # tail -n +2 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # head -n 1 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2993148 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2993148 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 
0 -e 0xFFFF -m 0x1E 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2993148 ']' 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:05.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:05.491 10:29:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:05.491 [2024-07-15 10:29:42.649712] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:22:05.491 [2024-07-15 10:29:42.649771] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:05.491 EAL: No free 2048 kB hugepages reported on node 1 00:22:05.751 [2024-07-15 10:29:42.734753] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:05.751 [2024-07-15 10:29:42.791801] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:05.751 [2024-07-15 10:29:42.791833] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:05.751 [2024-07-15 10:29:42.791839] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:05.751 [2024-07-15 10:29:42.791843] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:05.751 [2024-07-15 10:29:42.791847] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:05.751 [2024-07-15 10:29:42.791953] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:05.751 [2024-07-15 10:29:42.792109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:05.751 [2024-07-15 10:29:42.792278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:05.751 [2024-07-15 10:29:42.792280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:22:06.320 10:29:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:06.320 10:29:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:22:06.320 10:29:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:06.320 10:29:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:06.320 10:29:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:06.320 10:29:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:06.320 10:29:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:22:06.320 10:29:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.320 10:29:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:06.320 [2024-07-15 10:29:43.495711] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x22636b0/0x2267ba0) succeed. 00:22:06.320 [2024-07-15 10:29:43.505607] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2264cf0/0x22a9230) succeed. 
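The create_subsystems step that runs next (target/shutdown.sh@26 through @35) writes one block of JSON-RPC commands per subsystem into rpcs.txt and replays them through rpc_cmd. A minimal sketch of what a single iteration likely contains is shown below; each line is a scripts/rpc.py subcommand, the Malloc size/block size and the allow-any-host flag are illustrative assumptions, and only the Malloc bdev names, the cnode1..cnode10 NQNs, and the RDMA listener on 192.168.100.8 port 4420 are confirmed by the output that follows.

    # sketch of one create_subsystems iteration (i = 1..10); sizes and -a are assumed, not taken from this log
    bdev_malloc_create -b Malloc$i 128 512                                       # backing malloc bdev for namespace i
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a                         # -a: allow any host (assumption)
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i                   # expose the bdev as a namespace
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420   # listener reported below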
00:22:06.580 10:29:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.580 10:29:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:06.580 10:29:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:06.580 10:29:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:06.580 10:29:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:06.580 10:29:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:06.580 10:29:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:06.580 10:29:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:06.580 10:29:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:06.580 10:29:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:06.580 10:29:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:06.580 10:29:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:06.581 10:29:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:06.581 10:29:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:06.581 10:29:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:06.581 10:29:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:06.581 10:29:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:06.581 10:29:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:06.581 10:29:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:06.581 10:29:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:06.581 10:29:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:06.581 10:29:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:06.581 10:29:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:06.581 10:29:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:06.581 10:29:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:06.581 10:29:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:06.581 10:29:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:06.581 10:29:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.581 10:29:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:06.581 Malloc1 00:22:06.581 [2024-07-15 10:29:43.696934] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:06.581 Malloc2 00:22:06.581 Malloc3 00:22:06.841 Malloc4 
00:22:06.841 Malloc5 00:22:06.841 Malloc6 00:22:06.841 Malloc7 00:22:06.841 Malloc8 00:22:06.841 Malloc9 00:22:06.841 Malloc10 00:22:07.102 10:29:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.102 10:29:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:07.102 10:29:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:07.102 10:29:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:07.102 10:29:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=2993520 00:22:07.102 10:29:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 2993520 /var/tmp/bdevperf.sock 00:22:07.102 10:29:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2993520 ']' 00:22:07.102 10:29:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:07.102 10:29:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:07.102 10:29:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:07.102 10:29:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:07.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:07.102 10:29:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:07.102 10:29:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:07.102 10:29:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:07.102 10:29:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:22:07.102 10:29:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:22:07.102 10:29:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:07.102 10:29:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:07.102 { 00:22:07.102 "params": { 00:22:07.102 "name": "Nvme$subsystem", 00:22:07.102 "trtype": "$TEST_TRANSPORT", 00:22:07.102 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:07.102 "adrfam": "ipv4", 00:22:07.102 "trsvcid": "$NVMF_PORT", 00:22:07.102 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:07.102 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:07.102 "hdgst": ${hdgst:-false}, 00:22:07.102 "ddgst": ${ddgst:-false} 00:22:07.102 }, 00:22:07.103 "method": "bdev_nvme_attach_controller" 00:22:07.103 } 00:22:07.103 EOF 00:22:07.103 )") 00:22:07.103 10:29:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:07.103 10:29:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:07.103 10:29:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:07.103 { 00:22:07.103 "params": { 00:22:07.103 "name": "Nvme$subsystem", 
00:22:07.103 "trtype": "$TEST_TRANSPORT", 00:22:07.103 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:07.103 "adrfam": "ipv4", 00:22:07.103 "trsvcid": "$NVMF_PORT", 00:22:07.103 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:07.103 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:07.103 "hdgst": ${hdgst:-false}, 00:22:07.103 "ddgst": ${ddgst:-false} 00:22:07.103 }, 00:22:07.103 "method": "bdev_nvme_attach_controller" 00:22:07.103 } 00:22:07.103 EOF 00:22:07.103 )") 00:22:07.103 10:29:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:07.103 10:29:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:07.103 10:29:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:07.103 { 00:22:07.103 "params": { 00:22:07.103 "name": "Nvme$subsystem", 00:22:07.103 "trtype": "$TEST_TRANSPORT", 00:22:07.103 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:07.103 "adrfam": "ipv4", 00:22:07.103 "trsvcid": "$NVMF_PORT", 00:22:07.103 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:07.103 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:07.103 "hdgst": ${hdgst:-false}, 00:22:07.103 "ddgst": ${ddgst:-false} 00:22:07.103 }, 00:22:07.103 "method": "bdev_nvme_attach_controller" 00:22:07.103 } 00:22:07.103 EOF 00:22:07.103 )") 00:22:07.103 10:29:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:07.103 10:29:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:07.103 10:29:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:07.103 { 00:22:07.103 "params": { 00:22:07.103 "name": "Nvme$subsystem", 00:22:07.103 "trtype": "$TEST_TRANSPORT", 00:22:07.103 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:07.103 "adrfam": "ipv4", 00:22:07.103 "trsvcid": "$NVMF_PORT", 00:22:07.103 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:07.103 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:07.103 "hdgst": ${hdgst:-false}, 00:22:07.103 "ddgst": ${ddgst:-false} 00:22:07.103 }, 00:22:07.103 "method": "bdev_nvme_attach_controller" 00:22:07.103 } 00:22:07.103 EOF 00:22:07.103 )") 00:22:07.103 10:29:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:07.103 10:29:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:07.103 10:29:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:07.103 { 00:22:07.103 "params": { 00:22:07.103 "name": "Nvme$subsystem", 00:22:07.103 "trtype": "$TEST_TRANSPORT", 00:22:07.103 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:07.103 "adrfam": "ipv4", 00:22:07.103 "trsvcid": "$NVMF_PORT", 00:22:07.103 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:07.103 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:07.103 "hdgst": ${hdgst:-false}, 00:22:07.103 "ddgst": ${ddgst:-false} 00:22:07.103 }, 00:22:07.103 "method": "bdev_nvme_attach_controller" 00:22:07.103 } 00:22:07.103 EOF 00:22:07.103 )") 00:22:07.103 10:29:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:07.103 10:29:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:07.103 10:29:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:07.103 { 00:22:07.103 "params": { 00:22:07.103 "name": "Nvme$subsystem", 00:22:07.103 
"trtype": "$TEST_TRANSPORT", 00:22:07.103 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:07.103 "adrfam": "ipv4", 00:22:07.103 "trsvcid": "$NVMF_PORT", 00:22:07.103 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:07.103 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:07.103 "hdgst": ${hdgst:-false}, 00:22:07.103 "ddgst": ${ddgst:-false} 00:22:07.103 }, 00:22:07.103 "method": "bdev_nvme_attach_controller" 00:22:07.103 } 00:22:07.103 EOF 00:22:07.103 )") 00:22:07.103 10:29:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:07.103 [2024-07-15 10:29:44.145721] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:22:07.103 [2024-07-15 10:29:44.145772] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2993520 ] 00:22:07.103 10:29:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:07.103 10:29:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:07.103 { 00:22:07.103 "params": { 00:22:07.103 "name": "Nvme$subsystem", 00:22:07.103 "trtype": "$TEST_TRANSPORT", 00:22:07.103 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:07.103 "adrfam": "ipv4", 00:22:07.103 "trsvcid": "$NVMF_PORT", 00:22:07.103 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:07.103 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:07.103 "hdgst": ${hdgst:-false}, 00:22:07.103 "ddgst": ${ddgst:-false} 00:22:07.103 }, 00:22:07.103 "method": "bdev_nvme_attach_controller" 00:22:07.103 } 00:22:07.103 EOF 00:22:07.103 )") 00:22:07.103 10:29:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:07.103 10:29:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:07.103 10:29:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:07.103 { 00:22:07.103 "params": { 00:22:07.103 "name": "Nvme$subsystem", 00:22:07.103 "trtype": "$TEST_TRANSPORT", 00:22:07.103 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:07.103 "adrfam": "ipv4", 00:22:07.103 "trsvcid": "$NVMF_PORT", 00:22:07.103 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:07.103 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:07.103 "hdgst": ${hdgst:-false}, 00:22:07.103 "ddgst": ${ddgst:-false} 00:22:07.103 }, 00:22:07.103 "method": "bdev_nvme_attach_controller" 00:22:07.103 } 00:22:07.103 EOF 00:22:07.103 )") 00:22:07.103 10:29:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:07.103 10:29:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:07.103 10:29:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:07.103 { 00:22:07.103 "params": { 00:22:07.103 "name": "Nvme$subsystem", 00:22:07.103 "trtype": "$TEST_TRANSPORT", 00:22:07.103 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:07.103 "adrfam": "ipv4", 00:22:07.103 "trsvcid": "$NVMF_PORT", 00:22:07.103 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:07.103 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:07.103 "hdgst": ${hdgst:-false}, 00:22:07.103 "ddgst": ${ddgst:-false} 00:22:07.103 }, 00:22:07.103 "method": "bdev_nvme_attach_controller" 00:22:07.103 } 00:22:07.103 EOF 00:22:07.103 )") 00:22:07.103 
10:29:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:07.103 10:29:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:07.103 10:29:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:07.103 { 00:22:07.103 "params": { 00:22:07.103 "name": "Nvme$subsystem", 00:22:07.103 "trtype": "$TEST_TRANSPORT", 00:22:07.103 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:07.103 "adrfam": "ipv4", 00:22:07.103 "trsvcid": "$NVMF_PORT", 00:22:07.103 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:07.103 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:07.103 "hdgst": ${hdgst:-false}, 00:22:07.103 "ddgst": ${ddgst:-false} 00:22:07.103 }, 00:22:07.103 "method": "bdev_nvme_attach_controller" 00:22:07.103 } 00:22:07.103 EOF 00:22:07.103 )") 00:22:07.103 10:29:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:07.103 EAL: No free 2048 kB hugepages reported on node 1 00:22:07.103 10:29:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:22:07.103 10:29:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:22:07.103 10:29:44 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:07.103 "params": { 00:22:07.103 "name": "Nvme1", 00:22:07.103 "trtype": "rdma", 00:22:07.103 "traddr": "192.168.100.8", 00:22:07.103 "adrfam": "ipv4", 00:22:07.103 "trsvcid": "4420", 00:22:07.103 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:07.103 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:07.103 "hdgst": false, 00:22:07.103 "ddgst": false 00:22:07.103 }, 00:22:07.103 "method": "bdev_nvme_attach_controller" 00:22:07.103 },{ 00:22:07.103 "params": { 00:22:07.103 "name": "Nvme2", 00:22:07.103 "trtype": "rdma", 00:22:07.103 "traddr": "192.168.100.8", 00:22:07.103 "adrfam": "ipv4", 00:22:07.103 "trsvcid": "4420", 00:22:07.103 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:07.103 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:07.103 "hdgst": false, 00:22:07.103 "ddgst": false 00:22:07.103 }, 00:22:07.103 "method": "bdev_nvme_attach_controller" 00:22:07.103 },{ 00:22:07.103 "params": { 00:22:07.103 "name": "Nvme3", 00:22:07.103 "trtype": "rdma", 00:22:07.103 "traddr": "192.168.100.8", 00:22:07.103 "adrfam": "ipv4", 00:22:07.103 "trsvcid": "4420", 00:22:07.103 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:07.103 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:07.103 "hdgst": false, 00:22:07.103 "ddgst": false 00:22:07.103 }, 00:22:07.103 "method": "bdev_nvme_attach_controller" 00:22:07.103 },{ 00:22:07.103 "params": { 00:22:07.103 "name": "Nvme4", 00:22:07.103 "trtype": "rdma", 00:22:07.103 "traddr": "192.168.100.8", 00:22:07.103 "adrfam": "ipv4", 00:22:07.103 "trsvcid": "4420", 00:22:07.103 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:07.103 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:07.103 "hdgst": false, 00:22:07.104 "ddgst": false 00:22:07.104 }, 00:22:07.104 "method": "bdev_nvme_attach_controller" 00:22:07.104 },{ 00:22:07.104 "params": { 00:22:07.104 "name": "Nvme5", 00:22:07.104 "trtype": "rdma", 00:22:07.104 "traddr": "192.168.100.8", 00:22:07.104 "adrfam": "ipv4", 00:22:07.104 "trsvcid": "4420", 00:22:07.104 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:07.104 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:07.104 "hdgst": false, 00:22:07.104 "ddgst": false 00:22:07.104 }, 00:22:07.104 "method": "bdev_nvme_attach_controller" 00:22:07.104 },{ 00:22:07.104 "params": { 
00:22:07.104 "name": "Nvme6", 00:22:07.104 "trtype": "rdma", 00:22:07.104 "traddr": "192.168.100.8", 00:22:07.104 "adrfam": "ipv4", 00:22:07.104 "trsvcid": "4420", 00:22:07.104 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:07.104 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:07.104 "hdgst": false, 00:22:07.104 "ddgst": false 00:22:07.104 }, 00:22:07.104 "method": "bdev_nvme_attach_controller" 00:22:07.104 },{ 00:22:07.104 "params": { 00:22:07.104 "name": "Nvme7", 00:22:07.104 "trtype": "rdma", 00:22:07.104 "traddr": "192.168.100.8", 00:22:07.104 "adrfam": "ipv4", 00:22:07.104 "trsvcid": "4420", 00:22:07.104 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:07.104 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:07.104 "hdgst": false, 00:22:07.104 "ddgst": false 00:22:07.104 }, 00:22:07.104 "method": "bdev_nvme_attach_controller" 00:22:07.104 },{ 00:22:07.104 "params": { 00:22:07.104 "name": "Nvme8", 00:22:07.104 "trtype": "rdma", 00:22:07.104 "traddr": "192.168.100.8", 00:22:07.104 "adrfam": "ipv4", 00:22:07.104 "trsvcid": "4420", 00:22:07.104 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:07.104 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:07.104 "hdgst": false, 00:22:07.104 "ddgst": false 00:22:07.104 }, 00:22:07.104 "method": "bdev_nvme_attach_controller" 00:22:07.104 },{ 00:22:07.104 "params": { 00:22:07.104 "name": "Nvme9", 00:22:07.104 "trtype": "rdma", 00:22:07.104 "traddr": "192.168.100.8", 00:22:07.104 "adrfam": "ipv4", 00:22:07.104 "trsvcid": "4420", 00:22:07.104 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:07.104 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:07.104 "hdgst": false, 00:22:07.104 "ddgst": false 00:22:07.104 }, 00:22:07.104 "method": "bdev_nvme_attach_controller" 00:22:07.104 },{ 00:22:07.104 "params": { 00:22:07.104 "name": "Nvme10", 00:22:07.104 "trtype": "rdma", 00:22:07.104 "traddr": "192.168.100.8", 00:22:07.104 "adrfam": "ipv4", 00:22:07.104 "trsvcid": "4420", 00:22:07.104 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:07.104 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:07.104 "hdgst": false, 00:22:07.104 "ddgst": false 00:22:07.104 }, 00:22:07.104 "method": "bdev_nvme_attach_controller" 00:22:07.104 }' 00:22:07.104 [2024-07-15 10:29:44.213170] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:07.104 [2024-07-15 10:29:44.278353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:08.052 Running I/O for 10 seconds... 
00:22:08.052 10:29:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:08.052 10:29:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:22:08.052 10:29:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:08.052 10:29:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.052 10:29:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:08.354 10:29:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.354 10:29:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:08.354 10:29:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:08.354 10:29:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:22:08.354 10:29:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:22:08.354 10:29:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:22:08.354 10:29:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:22:08.354 10:29:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:08.354 10:29:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:08.354 10:29:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:08.354 10:29:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.354 10:29:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:08.637 10:29:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.637 10:29:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:22:08.637 10:29:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:22:08.637 10:29:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:22:08.637 10:29:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:22:08.637 10:29:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:08.637 10:29:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:08.637 10:29:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:08.637 10:29:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.637 10:29:45 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:08.897 10:29:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.897 10:29:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=162 00:22:08.897 10:29:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 162 -ge 100 ']' 00:22:08.897 10:29:46 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:22:08.897 10:29:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:22:08.897 10:29:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:22:08.897 10:29:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 2993520 00:22:08.897 10:29:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 2993520 ']' 00:22:08.897 10:29:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 2993520 00:22:08.897 10:29:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:22:08.897 10:29:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:08.897 10:29:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2993520 00:22:08.897 10:29:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:08.897 10:29:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:08.897 10:29:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2993520' 00:22:08.897 killing process with pid 2993520 00:22:08.897 10:29:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 2993520 00:22:08.897 10:29:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 2993520 00:22:09.158 Received shutdown signal, test time was about 0.995934 seconds 00:22:09.158 00:22:09.158 Latency(us) 00:22:09.158 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:09.158 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:09.158 Verification LBA range: start 0x0 length 0x400 00:22:09.158 Nvme1n1 : 0.98 292.53 18.28 0.00 0.00 214392.82 10158.08 246415.36 00:22:09.158 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:09.158 Verification LBA range: start 0x0 length 0x400 00:22:09.158 Nvme2n1 : 0.98 284.00 17.75 0.00 0.00 216080.79 10431.15 235929.60 00:22:09.158 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:09.158 Verification LBA range: start 0x0 length 0x400 00:22:09.158 Nvme3n1 : 0.98 325.27 20.33 0.00 0.00 185201.71 4560.21 177384.11 00:22:09.158 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:09.158 Verification LBA range: start 0x0 length 0x400 00:22:09.158 Nvme4n1 : 0.99 324.80 20.30 0.00 0.00 181666.99 11250.35 169519.79 00:22:09.158 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:09.158 Verification LBA range: start 0x0 length 0x400 00:22:09.158 Nvme5n1 : 0.99 324.17 20.26 0.00 0.00 179408.30 12178.77 152043.52 00:22:09.158 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:09.158 Verification LBA range: start 0x0 length 0x400 00:22:09.158 Nvme6n1 : 0.99 323.57 20.22 0.00 0.00 175634.86 13052.59 135441.07 00:22:09.158 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:09.158 Verification LBA range: start 0x0 length 0x400 00:22:09.158 Nvme7n1 : 0.99 322.96 20.19 0.00 0.00 172121.09 13981.01 118838.61 00:22:09.158 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:22:09.158 Verification LBA range: start 0x0 length 0x400 00:22:09.158 Nvme8n1 : 0.99 322.36 20.15 0.00 0.00 168590.68 14854.83 134567.25 00:22:09.158 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:09.158 Verification LBA range: start 0x0 length 0x400 00:22:09.158 Nvme9n1 : 0.99 321.74 20.11 0.00 0.00 165317.21 9721.17 151169.71 00:22:09.158 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:09.158 Verification LBA range: start 0x0 length 0x400 00:22:09.158 Nvme10n1 : 0.98 195.99 12.25 0.00 0.00 263940.27 9721.17 380982.61 00:22:09.158 =================================================================================================================== 00:22:09.158 Total : 3037.38 189.84 0.00 0.00 188604.03 4560.21 380982.61 00:22:09.419 10:29:46 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:22:10.357 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 2993148 00:22:10.358 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:22:10.358 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:22:10.358 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:10.358 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:10.358 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:10.358 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:10.358 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:22:10.358 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:22:10.358 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:22:10.358 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:22:10.358 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:10.358 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:22:10.358 rmmod nvme_rdma 00:22:10.358 rmmod nvme_fabrics 00:22:10.358 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:10.358 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:22:10.358 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:22:10.358 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 2993148 ']' 00:22:10.358 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 2993148 00:22:10.358 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 2993148 ']' 00:22:10.358 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 2993148 00:22:10.358 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:22:10.358 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:10.358 10:29:47 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2993148 00:22:10.616 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:10.616 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:10.616 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2993148' 00:22:10.616 killing process with pid 2993148 00:22:10.616 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 2993148 00:22:10.616 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 2993148 00:22:10.877 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:10.877 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:22:10.877 00:22:10.877 real 0m5.516s 00:22:10.877 user 0m22.481s 00:22:10.877 sys 0m1.000s 00:22:10.877 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:10.877 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:10.877 ************************************ 00:22:10.877 END TEST nvmf_shutdown_tc2 00:22:10.877 ************************************ 00:22:10.877 10:29:47 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:22:10.877 10:29:47 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:10.877 10:29:47 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:10.877 10:29:47 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:10.877 10:29:47 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:10.877 ************************************ 00:22:10.877 START TEST nvmf_shutdown_tc3 00:22:10.877 ************************************ 00:22:10.877 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:22:10.877 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:22:10.877 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:10.877 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:22:10.877 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:10.877 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:10.877 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:10.877 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:10.877 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:10.877 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:10.877 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:10.877 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:10.877 10:29:47 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:10.877 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:10.877 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:10.877 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:10.877 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:10.877 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:10.877 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:10.877 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:10.877 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:10.877 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:10.877 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:22:10.877 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:10.877 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:22:10.877 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:22:10.877 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:22:10.877 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:22:10.877 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:22:10.877 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:10.877 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:10.877 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:10.877 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:10.877 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:10.877 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:10.877 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:10.877 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:10.877 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:10.877 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:10.877 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:10.877 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:10.877 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:10.877 10:29:47 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:22:10.877 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:22:10.877 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:22:10.878 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:22:10.878 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:22:10.878 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:10.878 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:10.878 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:22:10.878 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:22:10.878 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:10.878 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:10.878 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:10.878 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:10.878 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:10.878 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:10.878 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:10.878 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:22:10.878 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:22:10.878 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:10.878 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:10.878 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:10.878 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:10.878 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:10.878 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:10.878 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:10.878 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:22:10.878 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:10.878 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:10.878 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:10.878 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:10.878 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:10.878 10:29:47 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:22:10.878 Found net devices under 0000:98:00.0: mlx_0_0 00:22:10.878 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:10.878 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:10.878 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:10.878 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:10.878 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:10.878 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:10.878 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:22:10.878 Found net devices under 0000:98:00.1: mlx_0_1 00:22:10.878 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:10.878 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:10.878 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:22:10.878 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:10.878 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:22:10.878 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:22:10.878 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # rdma_device_init 00:22:10.878 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:22:10.878 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # uname 00:22:10.878 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:22:10.878 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:22:10.878 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@63 -- # modprobe ib_core 00:22:10.878 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:22:10.878 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:22:10.878 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:22:10.878 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:22:10.878 10:29:47 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:22:10.878 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # allocate_nic_ips 00:22:10.878 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:10.878 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:22:10.878 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:10.878 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:10.878 10:29:48 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:10.878 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:10.878 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:10.878 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:10.878 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:10.878 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:10.878 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:10.878 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:22:10.878 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:10.878 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:10.878 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:10.878 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:10.878 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:10.878 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:10.878 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:22:10.878 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:10.878 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:22:10.878 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:10.878 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:10.878 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:10.878 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:10.878 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:22:10.878 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:22:10.878 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:22:10.878 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:10.878 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:22:10.878 altname enp152s0f0np0 00:22:10.878 altname ens817f0np0 00:22:10.878 inet 192.168.100.8/24 scope global mlx_0_0 00:22:10.878 valid_lft forever preferred_lft forever 00:22:10.878 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:10.878 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:22:10.878 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:10.878 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 
addr show mlx_0_1 00:22:10.878 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:10.878 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:10.878 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:22:10.878 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:22:10.878 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:22:11.138 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:11.138 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:22:11.138 altname enp152s0f1np1 00:22:11.138 altname ens817f1np1 00:22:11.138 inet 192.168.100.9/24 scope global mlx_0_1 00:22:11.138 valid_lft forever preferred_lft forever 00:22:11.138 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:22:11.138 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:11.138 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:11.138 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:22:11.138 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:22:11.138 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:22:11.138 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:11.138 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:11.138 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:11.138 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:11.138 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:11.138 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:11.138 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:11.138 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:11.138 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:11.138 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:22:11.138 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:11.138 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:11.138 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:11.138 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:11.138 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:11.138 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:11.138 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@105 -- # continue 2 00:22:11.138 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:11.138 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:22:11.138 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:11.138 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:11.138 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:11.138 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:11.138 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:11.138 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:22:11.138 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:11.138 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:11.138 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:11.138 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:11.138 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:22:11.138 192.168.100.9' 00:22:11.138 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:22:11.138 192.168.100.9' 00:22:11.138 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@457 -- # head -n 1 00:22:11.138 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:11.138 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:22:11.138 192.168.100.9' 00:22:11.138 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # tail -n +2 00:22:11.138 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # head -n 1 00:22:11.138 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:11.138 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:22:11.138 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:11.138 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:22:11.138 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:22:11.138 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:22:11.138 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:11.138 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:11.138 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:11.138 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:11.138 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=2994325 00:22:11.138 10:29:48 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 2994325 00:22:11.138 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:11.138 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 2994325 ']' 00:22:11.138 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:11.138 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:11.139 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:11.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:11.139 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:11.139 10:29:48 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:11.139 [2024-07-15 10:29:48.218099] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:22:11.139 [2024-07-15 10:29:48.218149] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:11.139 EAL: No free 2048 kB hugepages reported on node 1 00:22:11.139 [2024-07-15 10:29:48.295172] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:11.398 [2024-07-15 10:29:48.351287] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:11.398 [2024-07-15 10:29:48.351317] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:11.398 [2024-07-15 10:29:48.351323] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:11.398 [2024-07-15 10:29:48.351327] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:11.398 [2024-07-15 10:29:48.351332] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
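The process being waited on here is the stock nvmf_tgt binary launched by nvmfappstart; spelling out the invocation and its core-mask arithmetic (all paths and flags taken from the trace above, nothing added):

  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
  # -m 0x1E is binary 11110, i.e. cores 1-4, which is why four reactors come up on
  #   cores 1, 2, 3 and 4 in the lines that follow
  # -e 0xFFFF enables all tracepoint groups, matching the app_setup_trace notices above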
00:22:11.398 [2024-07-15 10:29:48.351467] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:11.398 [2024-07-15 10:29:48.351623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:11.398 [2024-07-15 10:29:48.351783] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:11.398 [2024-07-15 10:29:48.351784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:22:11.991 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:11.991 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:22:11.991 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:11.991 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:11.991 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:11.991 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:11.991 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:22:11.991 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.991 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:11.991 [2024-07-15 10:29:49.072699] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x56a6b0/0x56eba0) succeed. 00:22:11.991 [2024-07-15 10:29:49.083845] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x56bcf0/0x5b0230) succeed. 
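The nvmf_create_transport call above (target/shutdown.sh@20) is issued over the target's default RPC socket before any subsystems exist, and the two create_ib_device notices show the RDMA transport picking up both mlx5 ports found earlier (mlx_0_0/mlx_0_1). A standalone equivalent of that RPC, as a sketch assuming the in-tree scripts/rpc.py client and the default /var/tmp/spdk.sock socket that the rpc_cmd helper targets, would be:

  cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
  ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport \
      -t rdma --num-shared-buffers 1024 -u 8192    # same options as shutdown.sh@20 above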
00:22:11.991 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.991 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:11.991 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:11.991 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:11.991 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:12.250 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:12.250 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:12.250 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:12.250 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:12.250 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:12.250 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:12.250 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:12.250 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:12.250 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:12.250 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:12.250 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:12.250 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:12.250 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:12.250 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:12.250 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:12.250 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:12.250 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:12.250 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:12.250 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:12.250 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:12.250 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:12.250 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:12.250 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.250 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:12.250 Malloc1 00:22:12.250 [2024-07-15 10:29:49.278899] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:12.250 Malloc2 00:22:12.250 Malloc3 00:22:12.250 Malloc4 
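The ten cat calls above (target/shutdown.sh@27-@28) append one RPC batch per subsystem into rpcs.txt, which rpc_cmd then replays in one go at shutdown.sh@35. The batch text itself is not echoed in this log, but judging from the Malloc1..Malloc10 bdevs reported here, the "Target Listening on 192.168.100.8 port 4420" notice, and the cnode$i NQNs that bdevperf attaches to further down, each batch is roughly of the form below (a reconstruction, not the file contents; the malloc size/block-size arguments and the SPDK$i serial-number convention are assumptions):

  # per-subsystem RPC batch appended to rpcs.txt for i in 1..10 (sketch)
  bdev_malloc_create -b Malloc$i 128 512
  nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
  nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
  nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420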
00:22:12.250 Malloc5 00:22:12.509 Malloc6 00:22:12.509 Malloc7 00:22:12.509 Malloc8 00:22:12.509 Malloc9 00:22:12.509 Malloc10 00:22:12.509 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.509 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:12.509 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:12.509 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:12.509 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=2994715 00:22:12.509 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 2994715 /var/tmp/bdevperf.sock 00:22:12.509 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 2994715 ']' 00:22:12.509 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:12.509 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:12.509 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:12.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:12.509 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:12.509 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:12.509 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:12.509 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:12.509 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:22:12.509 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:22:12.509 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:12.509 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:12.509 { 00:22:12.509 "params": { 00:22:12.509 "name": "Nvme$subsystem", 00:22:12.509 "trtype": "$TEST_TRANSPORT", 00:22:12.509 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.509 "adrfam": "ipv4", 00:22:12.509 "trsvcid": "$NVMF_PORT", 00:22:12.509 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.509 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.509 "hdgst": ${hdgst:-false}, 00:22:12.509 "ddgst": ${ddgst:-false} 00:22:12.509 }, 00:22:12.509 "method": "bdev_nvme_attach_controller" 00:22:12.509 } 00:22:12.509 EOF 00:22:12.509 )") 00:22:12.509 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:12.509 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:12.509 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:12.509 { 00:22:12.509 "params": { 00:22:12.509 "name": "Nvme$subsystem", 
00:22:12.509 "trtype": "$TEST_TRANSPORT", 00:22:12.509 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.509 "adrfam": "ipv4", 00:22:12.509 "trsvcid": "$NVMF_PORT", 00:22:12.509 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.509 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.509 "hdgst": ${hdgst:-false}, 00:22:12.509 "ddgst": ${ddgst:-false} 00:22:12.509 }, 00:22:12.509 "method": "bdev_nvme_attach_controller" 00:22:12.509 } 00:22:12.509 EOF 00:22:12.509 )") 00:22:12.509 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:12.509 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:12.509 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:12.509 { 00:22:12.509 "params": { 00:22:12.509 "name": "Nvme$subsystem", 00:22:12.509 "trtype": "$TEST_TRANSPORT", 00:22:12.509 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.509 "adrfam": "ipv4", 00:22:12.509 "trsvcid": "$NVMF_PORT", 00:22:12.509 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.509 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.509 "hdgst": ${hdgst:-false}, 00:22:12.509 "ddgst": ${ddgst:-false} 00:22:12.509 }, 00:22:12.509 "method": "bdev_nvme_attach_controller" 00:22:12.509 } 00:22:12.509 EOF 00:22:12.509 )") 00:22:12.509 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:12.509 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:12.509 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:12.509 { 00:22:12.509 "params": { 00:22:12.509 "name": "Nvme$subsystem", 00:22:12.509 "trtype": "$TEST_TRANSPORT", 00:22:12.509 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.509 "adrfam": "ipv4", 00:22:12.509 "trsvcid": "$NVMF_PORT", 00:22:12.509 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.509 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.509 "hdgst": ${hdgst:-false}, 00:22:12.509 "ddgst": ${ddgst:-false} 00:22:12.509 }, 00:22:12.509 "method": "bdev_nvme_attach_controller" 00:22:12.509 } 00:22:12.509 EOF 00:22:12.509 )") 00:22:12.509 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:12.769 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:12.769 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:12.769 { 00:22:12.769 "params": { 00:22:12.769 "name": "Nvme$subsystem", 00:22:12.769 "trtype": "$TEST_TRANSPORT", 00:22:12.769 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.769 "adrfam": "ipv4", 00:22:12.769 "trsvcid": "$NVMF_PORT", 00:22:12.769 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.769 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.769 "hdgst": ${hdgst:-false}, 00:22:12.769 "ddgst": ${ddgst:-false} 00:22:12.769 }, 00:22:12.769 "method": "bdev_nvme_attach_controller" 00:22:12.769 } 00:22:12.769 EOF 00:22:12.769 )") 00:22:12.769 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:12.769 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:12.769 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:12.769 { 00:22:12.769 "params": { 00:22:12.769 "name": "Nvme$subsystem", 00:22:12.769 
"trtype": "$TEST_TRANSPORT", 00:22:12.769 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.769 "adrfam": "ipv4", 00:22:12.769 "trsvcid": "$NVMF_PORT", 00:22:12.769 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.769 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.769 "hdgst": ${hdgst:-false}, 00:22:12.769 "ddgst": ${ddgst:-false} 00:22:12.769 }, 00:22:12.769 "method": "bdev_nvme_attach_controller" 00:22:12.769 } 00:22:12.769 EOF 00:22:12.769 )") 00:22:12.769 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:12.769 [2024-07-15 10:29:49.722592] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:22:12.769 [2024-07-15 10:29:49.722646] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2994715 ] 00:22:12.769 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:12.769 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:12.769 { 00:22:12.769 "params": { 00:22:12.769 "name": "Nvme$subsystem", 00:22:12.769 "trtype": "$TEST_TRANSPORT", 00:22:12.769 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.769 "adrfam": "ipv4", 00:22:12.769 "trsvcid": "$NVMF_PORT", 00:22:12.769 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.769 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.769 "hdgst": ${hdgst:-false}, 00:22:12.769 "ddgst": ${ddgst:-false} 00:22:12.769 }, 00:22:12.769 "method": "bdev_nvme_attach_controller" 00:22:12.769 } 00:22:12.769 EOF 00:22:12.769 )") 00:22:12.769 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:12.769 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:12.769 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:12.769 { 00:22:12.769 "params": { 00:22:12.769 "name": "Nvme$subsystem", 00:22:12.769 "trtype": "$TEST_TRANSPORT", 00:22:12.769 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.769 "adrfam": "ipv4", 00:22:12.769 "trsvcid": "$NVMF_PORT", 00:22:12.769 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.769 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.769 "hdgst": ${hdgst:-false}, 00:22:12.769 "ddgst": ${ddgst:-false} 00:22:12.769 }, 00:22:12.769 "method": "bdev_nvme_attach_controller" 00:22:12.769 } 00:22:12.769 EOF 00:22:12.769 )") 00:22:12.769 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:12.769 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:12.769 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:12.769 { 00:22:12.769 "params": { 00:22:12.769 "name": "Nvme$subsystem", 00:22:12.769 "trtype": "$TEST_TRANSPORT", 00:22:12.769 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.769 "adrfam": "ipv4", 00:22:12.769 "trsvcid": "$NVMF_PORT", 00:22:12.769 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.769 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.769 "hdgst": ${hdgst:-false}, 00:22:12.769 "ddgst": ${ddgst:-false} 00:22:12.769 }, 00:22:12.769 "method": "bdev_nvme_attach_controller" 00:22:12.769 } 00:22:12.769 EOF 00:22:12.769 )") 00:22:12.769 
10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:12.769 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:12.769 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:12.769 { 00:22:12.769 "params": { 00:22:12.769 "name": "Nvme$subsystem", 00:22:12.769 "trtype": "$TEST_TRANSPORT", 00:22:12.769 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:12.769 "adrfam": "ipv4", 00:22:12.769 "trsvcid": "$NVMF_PORT", 00:22:12.769 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:12.769 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:12.769 "hdgst": ${hdgst:-false}, 00:22:12.769 "ddgst": ${ddgst:-false} 00:22:12.769 }, 00:22:12.769 "method": "bdev_nvme_attach_controller" 00:22:12.769 } 00:22:12.769 EOF 00:22:12.769 )") 00:22:12.769 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:12.769 EAL: No free 2048 kB hugepages reported on node 1 00:22:12.769 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:22:12.770 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:22:12.770 10:29:49 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:12.770 "params": { 00:22:12.770 "name": "Nvme1", 00:22:12.770 "trtype": "rdma", 00:22:12.770 "traddr": "192.168.100.8", 00:22:12.770 "adrfam": "ipv4", 00:22:12.770 "trsvcid": "4420", 00:22:12.770 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:12.770 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:12.770 "hdgst": false, 00:22:12.770 "ddgst": false 00:22:12.770 }, 00:22:12.770 "method": "bdev_nvme_attach_controller" 00:22:12.770 },{ 00:22:12.770 "params": { 00:22:12.770 "name": "Nvme2", 00:22:12.770 "trtype": "rdma", 00:22:12.770 "traddr": "192.168.100.8", 00:22:12.770 "adrfam": "ipv4", 00:22:12.770 "trsvcid": "4420", 00:22:12.770 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:12.770 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:12.770 "hdgst": false, 00:22:12.770 "ddgst": false 00:22:12.770 }, 00:22:12.770 "method": "bdev_nvme_attach_controller" 00:22:12.770 },{ 00:22:12.770 "params": { 00:22:12.770 "name": "Nvme3", 00:22:12.770 "trtype": "rdma", 00:22:12.770 "traddr": "192.168.100.8", 00:22:12.770 "adrfam": "ipv4", 00:22:12.770 "trsvcid": "4420", 00:22:12.770 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:12.770 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:12.770 "hdgst": false, 00:22:12.770 "ddgst": false 00:22:12.770 }, 00:22:12.770 "method": "bdev_nvme_attach_controller" 00:22:12.770 },{ 00:22:12.770 "params": { 00:22:12.770 "name": "Nvme4", 00:22:12.770 "trtype": "rdma", 00:22:12.770 "traddr": "192.168.100.8", 00:22:12.770 "adrfam": "ipv4", 00:22:12.770 "trsvcid": "4420", 00:22:12.770 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:12.770 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:12.770 "hdgst": false, 00:22:12.770 "ddgst": false 00:22:12.770 }, 00:22:12.770 "method": "bdev_nvme_attach_controller" 00:22:12.770 },{ 00:22:12.770 "params": { 00:22:12.770 "name": "Nvme5", 00:22:12.770 "trtype": "rdma", 00:22:12.770 "traddr": "192.168.100.8", 00:22:12.770 "adrfam": "ipv4", 00:22:12.770 "trsvcid": "4420", 00:22:12.770 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:12.770 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:12.770 "hdgst": false, 00:22:12.770 "ddgst": false 00:22:12.770 }, 00:22:12.770 "method": "bdev_nvme_attach_controller" 00:22:12.770 },{ 00:22:12.770 "params": { 
00:22:12.770 "name": "Nvme6", 00:22:12.770 "trtype": "rdma", 00:22:12.770 "traddr": "192.168.100.8", 00:22:12.770 "adrfam": "ipv4", 00:22:12.770 "trsvcid": "4420", 00:22:12.770 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:12.770 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:12.770 "hdgst": false, 00:22:12.770 "ddgst": false 00:22:12.770 }, 00:22:12.770 "method": "bdev_nvme_attach_controller" 00:22:12.770 },{ 00:22:12.770 "params": { 00:22:12.770 "name": "Nvme7", 00:22:12.770 "trtype": "rdma", 00:22:12.770 "traddr": "192.168.100.8", 00:22:12.770 "adrfam": "ipv4", 00:22:12.770 "trsvcid": "4420", 00:22:12.770 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:12.770 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:12.770 "hdgst": false, 00:22:12.770 "ddgst": false 00:22:12.770 }, 00:22:12.770 "method": "bdev_nvme_attach_controller" 00:22:12.770 },{ 00:22:12.770 "params": { 00:22:12.770 "name": "Nvme8", 00:22:12.770 "trtype": "rdma", 00:22:12.770 "traddr": "192.168.100.8", 00:22:12.770 "adrfam": "ipv4", 00:22:12.770 "trsvcid": "4420", 00:22:12.770 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:12.770 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:12.770 "hdgst": false, 00:22:12.770 "ddgst": false 00:22:12.770 }, 00:22:12.770 "method": "bdev_nvme_attach_controller" 00:22:12.770 },{ 00:22:12.770 "params": { 00:22:12.770 "name": "Nvme9", 00:22:12.770 "trtype": "rdma", 00:22:12.770 "traddr": "192.168.100.8", 00:22:12.770 "adrfam": "ipv4", 00:22:12.770 "trsvcid": "4420", 00:22:12.770 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:12.770 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:12.770 "hdgst": false, 00:22:12.770 "ddgst": false 00:22:12.770 }, 00:22:12.770 "method": "bdev_nvme_attach_controller" 00:22:12.770 },{ 00:22:12.770 "params": { 00:22:12.770 "name": "Nvme10", 00:22:12.770 "trtype": "rdma", 00:22:12.770 "traddr": "192.168.100.8", 00:22:12.770 "adrfam": "ipv4", 00:22:12.770 "trsvcid": "4420", 00:22:12.770 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:12.770 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:12.770 "hdgst": false, 00:22:12.770 "ddgst": false 00:22:12.770 }, 00:22:12.770 "method": "bdev_nvme_attach_controller" 00:22:12.770 }' 00:22:12.770 [2024-07-15 10:29:49.789519] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:12.770 [2024-07-15 10:29:49.854336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:13.707 Running I/O for 10 seconds... 
00:22:13.707 10:29:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:13.707 10:29:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:22:13.707 10:29:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:13.707 10:29:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.707 10:29:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:13.967 10:29:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.967 10:29:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:13.967 10:29:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:13.967 10:29:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:13.967 10:29:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:22:13.968 10:29:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:22:13.968 10:29:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:22:13.968 10:29:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:22:13.968 10:29:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:13.968 10:29:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:13.968 10:29:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:13.968 10:29:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.968 10:29:50 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:13.968 10:29:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.968 10:29:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:22:13.968 10:29:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:22:13.968 10:29:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:22:14.228 10:29:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:22:14.228 10:29:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:14.228 10:29:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:14.228 10:29:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:14.228 10:29:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.228 10:29:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:14.487 10:29:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.487 10:29:51 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=155 00:22:14.487 10:29:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 155 -ge 100 ']' 00:22:14.487 10:29:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:22:14.487 10:29:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:22:14.487 10:29:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:22:14.487 10:29:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 2994325 00:22:14.487 10:29:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 2994325 ']' 00:22:14.487 10:29:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 2994325 00:22:14.487 10:29:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:22:14.487 10:29:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:14.487 10:29:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2994325 00:22:14.487 10:29:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:14.487 10:29:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:14.487 10:29:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2994325' 00:22:14.487 killing process with pid 2994325 00:22:14.487 10:29:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 2994325 00:22:14.487 10:29:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 2994325 00:22:15.055 10:29:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:22:15.055 10:29:51 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:22:15.629 [2024-07-15 10:29:52.718279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.629 [2024-07-15 10:29:52.718326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32767 cdw0:3eff200 sqhd:c130 p:0 m:0 dnr:0 00:22:15.629 [2024-07-15 10:29:52.718338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.629 [2024-07-15 10:29:52.718346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32767 cdw0:3eff200 sqhd:c130 p:0 m:0 dnr:0 00:22:15.629 [2024-07-15 10:29:52.718354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.629 [2024-07-15 10:29:52.718361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32767 cdw0:3eff200 sqhd:c130 p:0 m:0 dnr:0 00:22:15.629 [2024-07-15 10:29:52.718369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.629 [2024-07-15 10:29:52.718383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32767 cdw0:3eff200 sqhd:c130 p:0 m:0 dnr:0 00:22:15.629 
[2024-07-15 10:29:52.721469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:15.629 [2024-07-15 10:29:52.721501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:22:15.629 [2024-07-15 10:29:52.721528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.629 [2024-07-15 10:29:52.721538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:61362 cdw0:3eff200 sqhd:df00 p:0 m:0 dnr:0 00:22:15.629 [2024-07-15 10:29:52.721547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.629 [2024-07-15 10:29:52.721554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:61362 cdw0:3eff200 sqhd:df00 p:0 m:0 dnr:0 00:22:15.629 [2024-07-15 10:29:52.721562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.629 [2024-07-15 10:29:52.721569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:61362 cdw0:3eff200 sqhd:df00 p:0 m:0 dnr:0 00:22:15.629 [2024-07-15 10:29:52.721577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.629 [2024-07-15 10:29:52.721584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:61362 cdw0:3eff200 sqhd:df00 p:0 m:0 dnr:0 00:22:15.629 [2024-07-15 10:29:52.724006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:15.629 [2024-07-15 10:29:52.724019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 
00:22:15.629 [2024-07-15 10:29:52.724034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.629 [2024-07-15 10:29:52.724041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:61362 cdw0:3eff200 sqhd:df00 p:0 m:0 dnr:0 00:22:15.629 [2024-07-15 10:29:52.724049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.629 [2024-07-15 10:29:52.724057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:61362 cdw0:3eff200 sqhd:df00 p:0 m:0 dnr:0 00:22:15.629 [2024-07-15 10:29:52.724065] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.629 [2024-07-15 10:29:52.724071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:61362 cdw0:3eff200 sqhd:df00 p:0 m:0 dnr:0 00:22:15.629 [2024-07-15 10:29:52.724079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.629 [2024-07-15 10:29:52.724086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:61362 cdw0:3eff200 sqhd:df00 p:0 m:0 dnr:0 00:22:15.629 [2024-07-15 10:29:52.726459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:15.629 [2024-07-15 10:29:52.726471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:22:15.629 [2024-07-15 10:29:52.726484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.629 [2024-07-15 10:29:52.726492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:61362 cdw0:3eff200 sqhd:df00 p:0 m:0 dnr:0 00:22:15.629 [2024-07-15 10:29:52.726499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.629 [2024-07-15 10:29:52.726510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:61362 cdw0:3eff200 sqhd:df00 p:0 m:0 dnr:0 00:22:15.629 [2024-07-15 10:29:52.726518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.629 [2024-07-15 10:29:52.726525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:61362 cdw0:3eff200 sqhd:df00 p:0 m:0 dnr:0 00:22:15.630 [2024-07-15 10:29:52.726533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.630 [2024-07-15 10:29:52.726540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:61362 cdw0:3eff200 sqhd:df00 p:0 m:0 dnr:0 00:22:15.630 [2024-07-15 10:29:52.729108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:15.630 [2024-07-15 10:29:52.729120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 
00:22:15.630 [2024-07-15 10:29:52.729135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.630 [2024-07-15 10:29:52.729142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:61362 cdw0:3eff200 sqhd:df00 p:0 m:0 dnr:0 00:22:15.630 [2024-07-15 10:29:52.729150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.630 [2024-07-15 10:29:52.729157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:61362 cdw0:3eff200 sqhd:df00 p:0 m:0 dnr:0 00:22:15.630 [2024-07-15 10:29:52.729165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.630 [2024-07-15 10:29:52.729172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:61362 cdw0:3eff200 sqhd:df00 p:0 m:0 dnr:0 00:22:15.630 [2024-07-15 10:29:52.729180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.630 [2024-07-15 10:29:52.729187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:61362 cdw0:3eff200 sqhd:df00 p:0 m:0 dnr:0 00:22:15.630 [2024-07-15 10:29:52.731857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:15.630 [2024-07-15 10:29:52.731869] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:22:15.630 [2024-07-15 10:29:52.731882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.630 [2024-07-15 10:29:52.731889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:61362 cdw0:3eff200 sqhd:df00 p:0 m:0 dnr:0 00:22:15.630 [2024-07-15 10:29:52.731897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.630 [2024-07-15 10:29:52.731904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:61362 cdw0:3eff200 sqhd:df00 p:0 m:0 dnr:0 00:22:15.630 [2024-07-15 10:29:52.731912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.630 [2024-07-15 10:29:52.731919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:61362 cdw0:3eff200 sqhd:df00 p:0 m:0 dnr:0 00:22:15.630 [2024-07-15 10:29:52.731927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.630 [2024-07-15 10:29:52.731934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:61362 cdw0:3eff200 sqhd:df00 p:0 m:0 dnr:0 00:22:15.630 [2024-07-15 10:29:52.734674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:15.630 [2024-07-15 10:29:52.734686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
00:22:15.630 [2024-07-15 10:29:52.734699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.630 [2024-07-15 10:29:52.734706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:61362 cdw0:3eff200 sqhd:df00 p:0 m:0 dnr:0 00:22:15.630 [2024-07-15 10:29:52.734714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.630 [2024-07-15 10:29:52.734721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:61362 cdw0:3eff200 sqhd:df00 p:0 m:0 dnr:0 00:22:15.630 [2024-07-15 10:29:52.734729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.630 [2024-07-15 10:29:52.734736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:61362 cdw0:3eff200 sqhd:df00 p:0 m:0 dnr:0 00:22:15.630 [2024-07-15 10:29:52.734744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.630 [2024-07-15 10:29:52.734751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:61362 cdw0:3eff200 sqhd:df00 p:0 m:0 dnr:0 00:22:15.630 [2024-07-15 10:29:52.736870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:15.630 [2024-07-15 10:29:52.736881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:22:15.630 [2024-07-15 10:29:52.736894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.630 [2024-07-15 10:29:52.736902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:61362 cdw0:3eff200 sqhd:df00 p:0 m:0 dnr:0 00:22:15.630 [2024-07-15 10:29:52.736910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.630 [2024-07-15 10:29:52.736917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:61362 cdw0:3eff200 sqhd:df00 p:0 m:0 dnr:0 00:22:15.630 [2024-07-15 10:29:52.736925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.630 [2024-07-15 10:29:52.736932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:61362 cdw0:3eff200 sqhd:df00 p:0 m:0 dnr:0 00:22:15.630 [2024-07-15 10:29:52.736939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.630 [2024-07-15 10:29:52.736946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:61362 cdw0:3eff200 sqhd:df00 p:0 m:0 dnr:0 00:22:15.630 [2024-07-15 10:29:52.739423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:15.630 [2024-07-15 10:29:52.739434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
00:22:15.630 [2024-07-15 10:29:52.739448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.630 [2024-07-15 10:29:52.739455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:61362 cdw0:3eff200 sqhd:df00 p:0 m:0 dnr:0 00:22:15.630 [2024-07-15 10:29:52.739463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.630 [2024-07-15 10:29:52.739470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:61362 cdw0:3eff200 sqhd:df00 p:0 m:0 dnr:0 00:22:15.630 [2024-07-15 10:29:52.739481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.630 [2024-07-15 10:29:52.739488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:61362 cdw0:3eff200 sqhd:df00 p:0 m:0 dnr:0 00:22:15.630 [2024-07-15 10:29:52.739495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.630 [2024-07-15 10:29:52.739502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:61362 cdw0:3eff200 sqhd:df00 p:0 m:0 dnr:0 00:22:15.630 [2024-07-15 10:29:52.741971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:15.630 [2024-07-15 10:29:52.741983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:22:15.630 [2024-07-15 10:29:52.741996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.630 [2024-07-15 10:29:52.742003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:61362 cdw0:3eff200 sqhd:df00 p:0 m:0 dnr:0 00:22:15.630 [2024-07-15 10:29:52.742011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.630 [2024-07-15 10:29:52.742018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:61362 cdw0:3eff200 sqhd:df00 p:0 m:0 dnr:0 00:22:15.630 [2024-07-15 10:29:52.742026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.630 [2024-07-15 10:29:52.742033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:61362 cdw0:3eff200 sqhd:df00 p:0 m:0 dnr:0 00:22:15.630 [2024-07-15 10:29:52.742040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:15.630 [2024-07-15 10:29:52.742047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:61362 cdw0:3eff200 sqhd:df00 p:0 m:0 dnr:0 00:22:15.630 [2024-07-15 10:29:52.744136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:15.630 [2024-07-15 10:29:52.744148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:15.630 [2024-07-15 10:29:52.746177] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001927b700 was disconnected and freed. reset controller. 00:22:15.630 [2024-07-15 10:29:52.746190] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:15.630 [2024-07-15 10:29:52.748428] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001927b480 was disconnected and freed. reset controller. 00:22:15.630 [2024-07-15 10:29:52.748441] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:15.630 [2024-07-15 10:29:52.750993] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001927b200 was disconnected and freed. reset controller. 00:22:15.630 [2024-07-15 10:29:52.751005] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:15.630 [2024-07-15 10:29:52.753376] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b60ee80 was disconnected and freed. reset controller. 00:22:15.630 [2024-07-15 10:29:52.753389] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:15.630 [2024-07-15 10:29:52.755828] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b60ec00 was disconnected and freed. reset controller. 00:22:15.630 [2024-07-15 10:29:52.755839] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:15.630 [2024-07-15 10:29:52.755917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a3afe00 len:0x10000 key:0x184000 00:22:15.630 [2024-07-15 10:29:52.755927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.630 [2024-07-15 10:29:52.755944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a39fd80 len:0x10000 key:0x184000 00:22:15.630 [2024-07-15 10:29:52.755952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.630 [2024-07-15 10:29:52.755965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a38fd00 len:0x10000 key:0x184000 00:22:15.630 [2024-07-15 10:29:52.755973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.630 [2024-07-15 10:29:52.755985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a37fc80 len:0x10000 key:0x184000 00:22:15.630 [2024-07-15 10:29:52.755993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.630 [2024-07-15 10:29:52.756004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a36fc00 len:0x10000 key:0x184000 00:22:15.630 [2024-07-15 10:29:52.756012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 
00:22:15.630 [2024-07-15 10:29:52.756024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a35fb80 len:0x10000 key:0x184000 00:22:15.631 [2024-07-15 10:29:52.756031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.631 [2024-07-15 10:29:52.756043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a34fb00 len:0x10000 key:0x184000 00:22:15.631 [2024-07-15 10:29:52.756051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.631 [2024-07-15 10:29:52.756062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a33fa80 len:0x10000 key:0x184000 00:22:15.631 [2024-07-15 10:29:52.756070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.631 [2024-07-15 10:29:52.756082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a32fa00 len:0x10000 key:0x184000 00:22:15.631 [2024-07-15 10:29:52.756090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.631 [2024-07-15 10:29:52.756102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a31f980 len:0x10000 key:0x184000 00:22:15.631 [2024-07-15 10:29:52.756109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.631 [2024-07-15 10:29:52.756121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a30f900 len:0x10000 key:0x184000 00:22:15.631 [2024-07-15 10:29:52.756128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.631 [2024-07-15 10:29:52.756141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a2ff880 len:0x10000 key:0x184000 00:22:15.631 [2024-07-15 10:29:52.756154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.631 [2024-07-15 10:29:52.756166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a2ef800 len:0x10000 key:0x184000 00:22:15.631 [2024-07-15 10:29:52.756173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.631 [2024-07-15 10:29:52.756185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a2df780 len:0x10000 key:0x184000 00:22:15.631 [2024-07-15 10:29:52.756192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.631 [2024-07-15 10:29:52.756204] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a2cf700 len:0x10000 key:0x184000 00:22:15.631 [2024-07-15 10:29:52.756213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.631 [2024-07-15 10:29:52.756225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a2bf680 len:0x10000 key:0x184000 00:22:15.631 [2024-07-15 10:29:52.756240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.631 [2024-07-15 10:29:52.756252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a2af600 len:0x10000 key:0x184000 00:22:15.631 [2024-07-15 10:29:52.756260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.631 [2024-07-15 10:29:52.756272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a29f580 len:0x10000 key:0x184000 00:22:15.631 [2024-07-15 10:29:52.756280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.631 [2024-07-15 10:29:52.756293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a28f500 len:0x10000 key:0x184000 00:22:15.631 [2024-07-15 10:29:52.756301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.631 [2024-07-15 10:29:52.756312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a27f480 len:0x10000 key:0x184000 00:22:15.631 [2024-07-15 10:29:52.756320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.631 [2024-07-15 10:29:52.756332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a26f400 len:0x10000 key:0x184000 00:22:15.631 [2024-07-15 10:29:52.756340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.631 [2024-07-15 10:29:52.756352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a25f380 len:0x10000 key:0x184000 00:22:15.631 [2024-07-15 10:29:52.756360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.631 [2024-07-15 10:29:52.756372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a24f300 len:0x10000 key:0x184000 00:22:15.631 [2024-07-15 10:29:52.756382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.631 [2024-07-15 10:29:52.756395] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a23f280 len:0x10000 key:0x184000 00:22:15.631 [2024-07-15 10:29:52.756403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.631 [2024-07-15 10:29:52.756415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a22f200 len:0x10000 key:0x184000 00:22:15.631 [2024-07-15 10:29:52.756422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.631 [2024-07-15 10:29:52.756434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a21f180 len:0x10000 key:0x184000 00:22:15.631 [2024-07-15 10:29:52.756443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.631 [2024-07-15 10:29:52.756455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a20f100 len:0x10000 key:0x184000 00:22:15.631 [2024-07-15 10:29:52.756462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.631 [2024-07-15 10:29:52.756474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5f0000 len:0x10000 key:0x183400 00:22:15.631 [2024-07-15 10:29:52.756481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.631 [2024-07-15 10:29:52.756493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5dff80 len:0x10000 key:0x183400 00:22:15.631 [2024-07-15 10:29:52.756501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.631 [2024-07-15 10:29:52.756513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5cff00 len:0x10000 key:0x183400 00:22:15.631 [2024-07-15 10:29:52.756521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.631 [2024-07-15 10:29:52.756533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5bfe80 len:0x10000 key:0x183400 00:22:15.631 [2024-07-15 10:29:52.756540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.631 [2024-07-15 10:29:52.756552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5afe00 len:0x10000 key:0x183400 00:22:15.631 [2024-07-15 10:29:52.756560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.631 [2024-07-15 10:29:52.756572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a59fd80 len:0x10000 key:0x183400 00:22:15.631 [2024-07-15 10:29:52.756579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.631 [2024-07-15 10:29:52.756591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a58fd00 len:0x10000 key:0x183400 00:22:15.631 [2024-07-15 10:29:52.756599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.631 [2024-07-15 10:29:52.756613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a57fc80 len:0x10000 key:0x183400 00:22:15.631 [2024-07-15 10:29:52.756620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.631 [2024-07-15 10:29:52.756632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a0efe00 len:0x10000 key:0x183100 00:22:15.631 [2024-07-15 10:29:52.756640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.631 [2024-07-15 10:29:52.758819] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b60e980 was disconnected and freed. reset controller. 00:22:15.631 [2024-07-15 10:29:52.758831] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:22:15.631 [2024-07-15 10:29:52.758908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a40f700 len:0x10000 key:0x183400 00:22:15.631 [2024-07-15 10:29:52.758917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.631 [2024-07-15 10:29:52.758930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a7f0000 len:0x10000 key:0x183500 00:22:15.631 [2024-07-15 10:29:52.758939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.631 [2024-07-15 10:29:52.758951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a7dff80 len:0x10000 key:0x183500 00:22:15.631 [2024-07-15 10:29:52.758959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.631 [2024-07-15 10:29:52.758971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a7cff00 len:0x10000 key:0x183500 00:22:15.631 [2024-07-15 10:29:52.758979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.631 [2024-07-15 10:29:52.758991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a7bfe80 len:0x10000 key:0x183500 00:22:15.631 [2024-07-15 10:29:52.758999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.631 [2024-07-15 10:29:52.759012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a7afe00 len:0x10000 key:0x183500 00:22:15.631 [2024-07-15 10:29:52.759019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.631 [2024-07-15 10:29:52.759031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a79fd80 len:0x10000 key:0x183500 00:22:15.631 [2024-07-15 10:29:52.759039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.631 [2024-07-15 10:29:52.759051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a78fd00 len:0x10000 key:0x183500 00:22:15.631 [2024-07-15 10:29:52.759059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.632 [2024-07-15 10:29:52.759071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a77fc80 len:0x10000 key:0x183500 00:22:15.632 [2024-07-15 10:29:52.759081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.632 [2024-07-15 10:29:52.759094] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a76fc00 len:0x10000 key:0x183500 00:22:15.632 [2024-07-15 10:29:52.759102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.632 [2024-07-15 10:29:52.759114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a75fb80 len:0x10000 key:0x183500 00:22:15.632 [2024-07-15 10:29:52.759122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.632 [2024-07-15 10:29:52.759134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a74fb00 len:0x10000 key:0x183500 00:22:15.632 [2024-07-15 10:29:52.759142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.632 [2024-07-15 10:29:52.759154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a73fa80 len:0x10000 key:0x183500 00:22:15.632 [2024-07-15 10:29:52.759161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.632 [2024-07-15 10:29:52.759173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a72fa00 len:0x10000 key:0x183500 00:22:15.632 [2024-07-15 10:29:52.759181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.632 [2024-07-15 10:29:52.759193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a71f980 len:0x10000 key:0x183500 00:22:15.632 [2024-07-15 10:29:52.759201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.632 [2024-07-15 10:29:52.759213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a70f900 len:0x10000 key:0x183500 00:22:15.632 [2024-07-15 10:29:52.759220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.632 [2024-07-15 10:29:52.759237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6ff880 len:0x10000 key:0x183500 00:22:15.632 [2024-07-15 10:29:52.759245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.632 [2024-07-15 10:29:52.759257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6ef800 len:0x10000 key:0x183500 00:22:15.632 [2024-07-15 10:29:52.759265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.632 [2024-07-15 10:29:52.759277] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6df780 len:0x10000 key:0x183500 00:22:15.632 [2024-07-15 10:29:52.759284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.632 [2024-07-15 10:29:52.759296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6cf700 len:0x10000 key:0x183500 00:22:15.632 [2024-07-15 10:29:52.759305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.632 [2024-07-15 10:29:52.759318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6bf680 len:0x10000 key:0x183500 00:22:15.632 [2024-07-15 10:29:52.759325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.632 [2024-07-15 10:29:52.759337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6af600 len:0x10000 key:0x183500 00:22:15.632 [2024-07-15 10:29:52.759345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.632 [2024-07-15 10:29:52.759357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a69f580 len:0x10000 key:0x183500 00:22:15.632 [2024-07-15 10:29:52.759365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.632 [2024-07-15 10:29:52.759377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a68f500 len:0x10000 key:0x183500 00:22:15.632 [2024-07-15 10:29:52.759384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.632 [2024-07-15 10:29:52.759396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a67f480 len:0x10000 key:0x183500 00:22:15.632 [2024-07-15 10:29:52.759404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.632 [2024-07-15 10:29:52.759416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a66f400 len:0x10000 key:0x183500 00:22:15.632 [2024-07-15 10:29:52.759424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.632 [2024-07-15 10:29:52.759436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a65f380 len:0x10000 key:0x183500 00:22:15.632 [2024-07-15 10:29:52.759445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.632 [2024-07-15 10:29:52.759458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 
lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a64f300 len:0x10000 key:0x183500 00:22:15.632 [2024-07-15 10:29:52.759466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.632 [2024-07-15 10:29:52.759478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a63f280 len:0x10000 key:0x183500 00:22:15.632 [2024-07-15 10:29:52.759485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.632 [2024-07-15 10:29:52.759497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a62f200 len:0x10000 key:0x183500 00:22:15.632 [2024-07-15 10:29:52.759505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.632 [2024-07-15 10:29:52.759518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a61f180 len:0x10000 key:0x183500 00:22:15.632 [2024-07-15 10:29:52.759527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.632 [2024-07-15 10:29:52.759540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a60f100 len:0x10000 key:0x183500 00:22:15.632 [2024-07-15 10:29:52.759547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.632 [2024-07-15 10:29:52.759559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9f0000 len:0x10000 key:0x183a00 00:22:15.632 [2024-07-15 10:29:52.759567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.632 [2024-07-15 10:29:52.759579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9dff80 len:0x10000 key:0x183a00 00:22:15.632 [2024-07-15 10:29:52.759586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.632 [2024-07-15 10:29:52.759598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9cff00 len:0x10000 key:0x183a00 00:22:15.632 [2024-07-15 10:29:52.759606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.632 [2024-07-15 10:29:52.759618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9bfe80 len:0x10000 key:0x183a00 00:22:15.632 [2024-07-15 10:29:52.759625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.632 [2024-07-15 10:29:52.759637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK 
ADDRESS 0x20001a9afe00 len:0x10000 key:0x183a00 00:22:15.632 [2024-07-15 10:29:52.759645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.632 [2024-07-15 10:29:52.759657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a99fd80 len:0x10000 key:0x183a00 00:22:15.632 [2024-07-15 10:29:52.759665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.632 [2024-07-15 10:29:52.759676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a98fd00 len:0x10000 key:0x183a00 00:22:15.632 [2024-07-15 10:29:52.759684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.632 [2024-07-15 10:29:52.759695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a97fc80 len:0x10000 key:0x183a00 00:22:15.632 [2024-07-15 10:29:52.759703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.632 [2024-07-15 10:29:52.759715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a96fc00 len:0x10000 key:0x183a00 00:22:15.632 [2024-07-15 10:29:52.759723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.632 [2024-07-15 10:29:52.759735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a95fb80 len:0x10000 key:0x183a00 00:22:15.632 [2024-07-15 10:29:52.759742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.632 [2024-07-15 10:29:52.759755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a94fb00 len:0x10000 key:0x183a00 00:22:15.632 [2024-07-15 10:29:52.759763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.632 [2024-07-15 10:29:52.759775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a93fa80 len:0x10000 key:0x183a00 00:22:15.632 [2024-07-15 10:29:52.759782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.632 [2024-07-15 10:29:52.759794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a92fa00 len:0x10000 key:0x183a00 00:22:15.632 [2024-07-15 10:29:52.759801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.632 [2024-07-15 10:29:52.759813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a91f980 len:0x10000 
key:0x183a00 00:22:15.632 [2024-07-15 10:29:52.759821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.632 [2024-07-15 10:29:52.759833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a90f900 len:0x10000 key:0x183a00 00:22:15.632 [2024-07-15 10:29:52.759840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.632 [2024-07-15 10:29:52.759852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8ff880 len:0x10000 key:0x183a00 00:22:15.633 [2024-07-15 10:29:52.759859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.633 [2024-07-15 10:29:52.759872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8ef800 len:0x10000 key:0x183a00 00:22:15.633 [2024-07-15 10:29:52.759879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.633 [2024-07-15 10:29:52.759891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8df780 len:0x10000 key:0x183a00 00:22:15.633 [2024-07-15 10:29:52.759898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.633 [2024-07-15 10:29:52.759910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8cf700 len:0x10000 key:0x183a00 00:22:15.633 [2024-07-15 10:29:52.759918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.633 [2024-07-15 10:29:52.759931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8bf680 len:0x10000 key:0x183a00 00:22:15.633 [2024-07-15 10:29:52.759938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.633 [2024-07-15 10:29:52.759950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8af600 len:0x10000 key:0x183a00 00:22:15.633 [2024-07-15 10:29:52.759958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.633 [2024-07-15 10:29:52.759969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a89f580 len:0x10000 key:0x183a00 00:22:15.633 [2024-07-15 10:29:52.759979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.633 [2024-07-15 10:29:52.759991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a88f500 len:0x10000 key:0x183a00 00:22:15.633 [2024-07-15 
10:29:52.759999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.633 [2024-07-15 10:29:52.760011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a87f480 len:0x10000 key:0x183a00 00:22:15.633 [2024-07-15 10:29:52.760019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.633 [2024-07-15 10:29:52.760031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a86f400 len:0x10000 key:0x183a00 00:22:15.633 [2024-07-15 10:29:52.760039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.633 [2024-07-15 10:29:52.760051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a85f380 len:0x10000 key:0x183a00 00:22:15.633 [2024-07-15 10:29:52.760058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.633 [2024-07-15 10:29:52.760070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a46fa00 len:0x10000 key:0x183400 00:22:15.633 [2024-07-15 10:29:52.760078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.633 [2024-07-15 10:29:52.760090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f939000 len:0x10000 key:0x184300 00:22:15.633 [2024-07-15 10:29:52.760097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.633 [2024-07-15 10:29:52.760112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f918000 len:0x10000 key:0x184300 00:22:15.633 [2024-07-15 10:29:52.760119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.633 [2024-07-15 10:29:52.760132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f8f7000 len:0x10000 key:0x184300 00:22:15.633 [2024-07-15 10:29:52.760140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.633 [2024-07-15 10:29:52.760153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f8d6000 len:0x10000 key:0x184300 00:22:15.633 [2024-07-15 10:29:52.760160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.633 [2024-07-15 10:29:52.760173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f8b5000 len:0x10000 key:0x184300 00:22:15.633 [2024-07-15 10:29:52.760181] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.633 [2024-07-15 10:29:52.763297] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b60e700 was disconnected and freed. reset controller. 00:22:15.633 [2024-07-15 10:29:52.763313] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:15.633 [2024-07-15 10:29:52.763325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa3f880 len:0x10000 key:0x184100 00:22:15.633 [2024-07-15 10:29:52.763333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.633 [2024-07-15 10:29:52.763347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa2f800 len:0x10000 key:0x184100 00:22:15.633 [2024-07-15 10:29:52.763355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.633 [2024-07-15 10:29:52.763367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa1f780 len:0x10000 key:0x184100 00:22:15.633 [2024-07-15 10:29:52.763375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.633 [2024-07-15 10:29:52.763387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa0f700 len:0x10000 key:0x184100 00:22:15.633 [2024-07-15 10:29:52.763395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.633 [2024-07-15 10:29:52.763407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a84f300 len:0x10000 key:0x183a00 00:22:15.633 [2024-07-15 10:29:52.763414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.633 [2024-07-15 10:29:52.763427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a83f280 len:0x10000 key:0x183a00 00:22:15.633 [2024-07-15 10:29:52.763435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.633 [2024-07-15 10:29:52.763448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a82f200 len:0x10000 key:0x183a00 00:22:15.633 [2024-07-15 10:29:52.763455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.633 [2024-07-15 10:29:52.763468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a81f180 len:0x10000 key:0x183a00 00:22:15.633 [2024-07-15 10:29:52.763476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 
dnr:0 00:22:15.633 [2024-07-15 10:29:52.763488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a80f100 len:0x10000 key:0x183a00 00:22:15.633 [2024-07-15 10:29:52.763495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.633 [2024-07-15 10:29:52.763507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001adf0000 len:0x10000 key:0x183700 00:22:15.633 [2024-07-15 10:29:52.763514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.633 [2024-07-15 10:29:52.763526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001addff80 len:0x10000 key:0x183700 00:22:15.633 [2024-07-15 10:29:52.763534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.633 [2024-07-15 10:29:52.763548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001adcff00 len:0x10000 key:0x183700 00:22:15.633 [2024-07-15 10:29:52.763555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.633 [2024-07-15 10:29:52.763569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001adbfe80 len:0x10000 key:0x183700 00:22:15.633 [2024-07-15 10:29:52.763577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.633 [2024-07-15 10:29:52.763589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001adafe00 len:0x10000 key:0x183700 00:22:15.633 [2024-07-15 10:29:52.763597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.634 [2024-07-15 10:29:52.763609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad9fd80 len:0x10000 key:0x183700 00:22:15.634 [2024-07-15 10:29:52.763616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.634 [2024-07-15 10:29:52.763629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad8fd00 len:0x10000 key:0x183700 00:22:15.634 [2024-07-15 10:29:52.763636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.634 [2024-07-15 10:29:52.763648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad7fc80 len:0x10000 key:0x183700 00:22:15.634 [2024-07-15 10:29:52.763655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.634 [2024-07-15 
10:29:52.763667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad6fc00 len:0x10000 key:0x183700 00:22:15.634 [2024-07-15 10:29:52.763676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.634 [2024-07-15 10:29:52.763687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad5fb80 len:0x10000 key:0x183700 00:22:15.634 [2024-07-15 10:29:52.763695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.634 [2024-07-15 10:29:52.763707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad4fb00 len:0x10000 key:0x183700 00:22:15.634 [2024-07-15 10:29:52.763714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.634 [2024-07-15 10:29:52.763726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad3fa80 len:0x10000 key:0x183700 00:22:15.634 [2024-07-15 10:29:52.763734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.634 [2024-07-15 10:29:52.763746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad2fa00 len:0x10000 key:0x183700 00:22:15.634 [2024-07-15 10:29:52.763754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.634 [2024-07-15 10:29:52.763765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad1f980 len:0x10000 key:0x183700 00:22:15.634 [2024-07-15 10:29:52.763775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.634 [2024-07-15 10:29:52.763787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad0f900 len:0x10000 key:0x183700 00:22:15.634 [2024-07-15 10:29:52.763795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.634 [2024-07-15 10:29:52.763806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acff880 len:0x10000 key:0x183700 00:22:15.634 [2024-07-15 10:29:52.763814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.634 [2024-07-15 10:29:52.763825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acef800 len:0x10000 key:0x183700 00:22:15.634 [2024-07-15 10:29:52.763834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.634 [2024-07-15 10:29:52.763846] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acdf780 len:0x10000 key:0x183700 00:22:15.634 [2024-07-15 10:29:52.763853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.634 [2024-07-15 10:29:52.763865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001accf700 len:0x10000 key:0x183700 00:22:15.634 [2024-07-15 10:29:52.763873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.634 [2024-07-15 10:29:52.763885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acbf680 len:0x10000 key:0x183700 00:22:15.634 [2024-07-15 10:29:52.763892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.634 [2024-07-15 10:29:52.763904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acaf600 len:0x10000 key:0x183700 00:22:15.634 [2024-07-15 10:29:52.763911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.634 [2024-07-15 10:29:52.763923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac9f580 len:0x10000 key:0x183700 00:22:15.634 [2024-07-15 10:29:52.763931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.634 [2024-07-15 10:29:52.763943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac8f500 len:0x10000 key:0x183700 00:22:15.634 [2024-07-15 10:29:52.763951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.634 [2024-07-15 10:29:52.763963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac7f480 len:0x10000 key:0x183700 00:22:15.634 [2024-07-15 10:29:52.763970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.634 [2024-07-15 10:29:52.763983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac6f400 len:0x10000 key:0x183700 00:22:15.634 [2024-07-15 10:29:52.763992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.634 [2024-07-15 10:29:52.764003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac5f380 len:0x10000 key:0x183700 00:22:15.634 [2024-07-15 10:29:52.764010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.634 [2024-07-15 10:29:52.764022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac4f300 len:0x10000 key:0x183700 00:22:15.634 [2024-07-15 10:29:52.764030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.634 [2024-07-15 10:29:52.764042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac3f280 len:0x10000 key:0x183700 00:22:15.634 [2024-07-15 10:29:52.764050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.634 [2024-07-15 10:29:52.764062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac2f200 len:0x10000 key:0x183700 00:22:15.634 [2024-07-15 10:29:52.764070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.634 [2024-07-15 10:29:52.764082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac1f180 len:0x10000 key:0x183700 00:22:15.634 [2024-07-15 10:29:52.764090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.634 [2024-07-15 10:29:52.764102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac0f100 len:0x10000 key:0x183700 00:22:15.634 [2024-07-15 10:29:52.764109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.634 [2024-07-15 10:29:52.764121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aff0000 len:0x10000 key:0x183e00 00:22:15.634 [2024-07-15 10:29:52.764129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.634 [2024-07-15 10:29:52.764141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afdff80 len:0x10000 key:0x183e00 00:22:15.634 [2024-07-15 10:29:52.764149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.634 [2024-07-15 10:29:52.764160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afcff00 len:0x10000 key:0x183e00 00:22:15.634 [2024-07-15 10:29:52.764168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.634 [2024-07-15 10:29:52.764180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afbfe80 len:0x10000 key:0x183e00 00:22:15.634 [2024-07-15 10:29:52.764188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.634 [2024-07-15 10:29:52.764200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 
SGL KEYED DATA BLOCK ADDRESS 0x20001afafe00 len:0x10000 key:0x183e00 00:22:15.634 [2024-07-15 10:29:52.764207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.634 [2024-07-15 10:29:52.764220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af9fd80 len:0x10000 key:0x183e00 00:22:15.634 [2024-07-15 10:29:52.764228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.634 [2024-07-15 10:29:52.764246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af8fd00 len:0x10000 key:0x183e00 00:22:15.634 [2024-07-15 10:29:52.764254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.634 [2024-07-15 10:29:52.764266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af7fc80 len:0x10000 key:0x183e00 00:22:15.634 [2024-07-15 10:29:52.764274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.634 [2024-07-15 10:29:52.764287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af6fc00 len:0x10000 key:0x183e00 00:22:15.634 [2024-07-15 10:29:52.764295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.634 [2024-07-15 10:29:52.764307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af5fb80 len:0x10000 key:0x183e00 00:22:15.634 [2024-07-15 10:29:52.764314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.634 [2024-07-15 10:29:52.764326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af4fb00 len:0x10000 key:0x183e00 00:22:15.634 [2024-07-15 10:29:52.764333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.634 [2024-07-15 10:29:52.764346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af3fa80 len:0x10000 key:0x183e00 00:22:15.634 [2024-07-15 10:29:52.764353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.634 [2024-07-15 10:29:52.764365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af2fa00 len:0x10000 key:0x183e00 00:22:15.635 [2024-07-15 10:29:52.764373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.635 [2024-07-15 10:29:52.764385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aaefe00 
len:0x10000 key:0x184100 00:22:15.635 [2024-07-15 10:29:52.764393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.635 [2024-07-15 10:29:52.764404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dc17000 len:0x10000 key:0x184300 00:22:15.635 [2024-07-15 10:29:52.764412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.635 [2024-07-15 10:29:52.764426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dbf6000 len:0x10000 key:0x184300 00:22:15.635 [2024-07-15 10:29:52.764434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.635 [2024-07-15 10:29:52.764449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013653000 len:0x10000 key:0x184300 00:22:15.635 [2024-07-15 10:29:52.764456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.635 [2024-07-15 10:29:52.764469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013632000 len:0x10000 key:0x184300 00:22:15.635 [2024-07-15 10:29:52.764477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.635 [2024-07-15 10:29:52.764490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013611000 len:0x10000 key:0x184300 00:22:15.635 [2024-07-15 10:29:52.764498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.635 [2024-07-15 10:29:52.764510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000135f0000 len:0x10000 key:0x184300 00:22:15.635 [2024-07-15 10:29:52.764518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.635 [2024-07-15 10:29:52.764530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001254f000 len:0x10000 key:0x184300 00:22:15.635 [2024-07-15 10:29:52.764538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.635 [2024-07-15 10:29:52.764551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001252e000 len:0x10000 key:0x184300 00:22:15.635 [2024-07-15 10:29:52.764558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.635 [2024-07-15 10:29:52.764571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001250d000 len:0x10000 key:0x184300 00:22:15.635 [2024-07-15 
10:29:52.764578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.635 [2024-07-15 10:29:52.764592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000124ec000 len:0x10000 key:0x184300 00:22:15.635 [2024-07-15 10:29:52.764599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.635 [2024-07-15 10:29:52.767732] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b60e480 was disconnected and freed. reset controller. 00:22:15.635 [2024-07-15 10:29:52.767744] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:15.635 [2024-07-15 10:29:52.767755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae0f700 len:0x10000 key:0x183e00 00:22:15.635 [2024-07-15 10:29:52.767763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.635 [2024-07-15 10:29:52.767778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1f0000 len:0x10000 key:0x183f00 00:22:15.635 [2024-07-15 10:29:52.767786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.635 [2024-07-15 10:29:52.767798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1dff80 len:0x10000 key:0x183f00 00:22:15.635 [2024-07-15 10:29:52.767808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.635 [2024-07-15 10:29:52.767821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1cff00 len:0x10000 key:0x183f00 00:22:15.635 [2024-07-15 10:29:52.767828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.635 [2024-07-15 10:29:52.767840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1bfe80 len:0x10000 key:0x183f00 00:22:15.635 [2024-07-15 10:29:52.767848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.635 [2024-07-15 10:29:52.767860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1afe00 len:0x10000 key:0x183f00 00:22:15.635 [2024-07-15 10:29:52.767869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.635 [2024-07-15 10:29:52.767881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b19fd80 len:0x10000 key:0x183f00 00:22:15.635 [2024-07-15 10:29:52.767889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.635 [2024-07-15 10:29:52.767901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b18fd00 len:0x10000 key:0x183f00 00:22:15.635 [2024-07-15 10:29:52.767909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.635 [2024-07-15 10:29:52.767921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b17fc80 len:0x10000 key:0x183f00 00:22:15.635 [2024-07-15 10:29:52.767929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.635 [2024-07-15 10:29:52.767941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b16fc00 len:0x10000 key:0x183f00 00:22:15.635 [2024-07-15 10:29:52.767948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.635 [2024-07-15 10:29:52.767960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b15fb80 len:0x10000 key:0x183f00 00:22:15.635 [2024-07-15 10:29:52.767969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.635 [2024-07-15 10:29:52.767981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b14fb00 len:0x10000 key:0x183f00 00:22:15.635 [2024-07-15 10:29:52.767988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.635 [2024-07-15 10:29:52.768000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b13fa80 len:0x10000 key:0x183f00 00:22:15.635 [2024-07-15 10:29:52.768009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.635 [2024-07-15 10:29:52.768021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b12fa00 len:0x10000 key:0x183f00 00:22:15.635 [2024-07-15 10:29:52.768028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.635 [2024-07-15 10:29:52.768044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b11f980 len:0x10000 key:0x183f00 00:22:15.635 [2024-07-15 10:29:52.768052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.635 [2024-07-15 10:29:52.768064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b10f900 len:0x10000 key:0x183f00 00:22:15.635 [2024-07-15 10:29:52.768072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.635 
[2024-07-15 10:29:52.768084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0ff880 len:0x10000 key:0x183f00 00:22:15.635 [2024-07-15 10:29:52.768092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.635 [2024-07-15 10:29:52.768104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0ef800 len:0x10000 key:0x183f00 00:22:15.635 [2024-07-15 10:29:52.768112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.635 [2024-07-15 10:29:52.768124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0df780 len:0x10000 key:0x183f00 00:22:15.635 [2024-07-15 10:29:52.768131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.635 [2024-07-15 10:29:52.768143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0cf700 len:0x10000 key:0x183f00 00:22:15.635 [2024-07-15 10:29:52.768150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.635 [2024-07-15 10:29:52.768163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0bf680 len:0x10000 key:0x183f00 00:22:15.635 [2024-07-15 10:29:52.768171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.635 [2024-07-15 10:29:52.768183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0af600 len:0x10000 key:0x183f00 00:22:15.635 [2024-07-15 10:29:52.768190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.635 [2024-07-15 10:29:52.768203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b09f580 len:0x10000 key:0x183f00 00:22:15.635 [2024-07-15 10:29:52.768210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.635 [2024-07-15 10:29:52.768222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b08f500 len:0x10000 key:0x183f00 00:22:15.635 [2024-07-15 10:29:52.768234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.635 [2024-07-15 10:29:52.768246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b07f480 len:0x10000 key:0x183f00 00:22:15.635 [2024-07-15 10:29:52.768254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.635 [2024-07-15 10:29:52.768267] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b06f400 len:0x10000 key:0x183f00 00:22:15.635 [2024-07-15 10:29:52.768276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.635 [2024-07-15 10:29:52.768288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b05f380 len:0x10000 key:0x183f00 00:22:15.635 [2024-07-15 10:29:52.768296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.636 [2024-07-15 10:29:52.768308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b04f300 len:0x10000 key:0x183f00 00:22:15.636 [2024-07-15 10:29:52.768316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.636 [2024-07-15 10:29:52.768327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b03f280 len:0x10000 key:0x183f00 00:22:15.636 [2024-07-15 10:29:52.768335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.636 [2024-07-15 10:29:52.768347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b02f200 len:0x10000 key:0x183f00 00:22:15.636 [2024-07-15 10:29:52.768355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.636 [2024-07-15 10:29:52.768367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b01f180 len:0x10000 key:0x183f00 00:22:15.636 [2024-07-15 10:29:52.768374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.636 [2024-07-15 10:29:52.768386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b00f100 len:0x10000 key:0x183f00 00:22:15.636 [2024-07-15 10:29:52.768394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.636 [2024-07-15 10:29:52.768406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3f0000 len:0x10000 key:0x183200 00:22:15.636 [2024-07-15 10:29:52.768413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.636 [2024-07-15 10:29:52.768425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3dff80 len:0x10000 key:0x183200 00:22:15.636 [2024-07-15 10:29:52.768433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.636 [2024-07-15 10:29:52.768445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3cff00 len:0x10000 key:0x183200 00:22:15.636 [2024-07-15 10:29:52.768452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.636 [2024-07-15 10:29:52.768464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3bfe80 len:0x10000 key:0x183200 00:22:15.636 [2024-07-15 10:29:52.768472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.636 [2024-07-15 10:29:52.768484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3afe00 len:0x10000 key:0x183200 00:22:15.636 [2024-07-15 10:29:52.768493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.636 [2024-07-15 10:29:52.768505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b39fd80 len:0x10000 key:0x183200 00:22:15.636 [2024-07-15 10:29:52.768514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.636 [2024-07-15 10:29:52.768526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b38fd00 len:0x10000 key:0x183200 00:22:15.636 [2024-07-15 10:29:52.768535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.636 [2024-07-15 10:29:52.768548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b37fc80 len:0x10000 key:0x183200 00:22:15.636 [2024-07-15 10:29:52.768555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.636 [2024-07-15 10:29:52.768567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b36fc00 len:0x10000 key:0x183200 00:22:15.636 [2024-07-15 10:29:52.768575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.636 [2024-07-15 10:29:52.768587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b35fb80 len:0x10000 key:0x183200 00:22:15.636 [2024-07-15 10:29:52.768594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.636 [2024-07-15 10:29:52.768606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b34fb00 len:0x10000 key:0x183200 00:22:15.636 [2024-07-15 10:29:52.768614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.636 [2024-07-15 10:29:52.768626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 
SGL KEYED DATA BLOCK ADDRESS 0x20001b33fa80 len:0x10000 key:0x183200 00:22:15.636 [2024-07-15 10:29:52.768634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.636 [2024-07-15 10:29:52.768645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b32fa00 len:0x10000 key:0x183200 00:22:15.636 [2024-07-15 10:29:52.768654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.636 [2024-07-15 10:29:52.768665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b31f980 len:0x10000 key:0x183200 00:22:15.636 [2024-07-15 10:29:52.768673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.636 [2024-07-15 10:29:52.768685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b30f900 len:0x10000 key:0x183200 00:22:15.636 [2024-07-15 10:29:52.768693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.636 [2024-07-15 10:29:52.768704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2ff880 len:0x10000 key:0x183200 00:22:15.636 [2024-07-15 10:29:52.768714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.636 [2024-07-15 10:29:52.768726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2ef800 len:0x10000 key:0x183200 00:22:15.636 [2024-07-15 10:29:52.768735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.636 [2024-07-15 10:29:52.768746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2df780 len:0x10000 key:0x183200 00:22:15.636 [2024-07-15 10:29:52.768754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.636 [2024-07-15 10:29:52.768766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2cf700 len:0x10000 key:0x183200 00:22:15.636 [2024-07-15 10:29:52.768774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.636 [2024-07-15 10:29:52.768785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2bf680 len:0x10000 key:0x183200 00:22:15.636 [2024-07-15 10:29:52.768794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.636 [2024-07-15 10:29:52.768806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2af600 
len:0x10000 key:0x183200 00:22:15.636 [2024-07-15 10:29:52.768813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.636 [2024-07-15 10:29:52.768825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b29f580 len:0x10000 key:0x183200 00:22:15.636 [2024-07-15 10:29:52.768833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.636 [2024-07-15 10:29:52.768845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b28f500 len:0x10000 key:0x183200 00:22:15.636 [2024-07-15 10:29:52.768853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.636 [2024-07-15 10:29:52.768864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b27f480 len:0x10000 key:0x183200 00:22:15.636 [2024-07-15 10:29:52.768872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.636 [2024-07-15 10:29:52.768884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b26f400 len:0x10000 key:0x183200 00:22:15.636 [2024-07-15 10:29:52.768892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.636 [2024-07-15 10:29:52.768903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b25f380 len:0x10000 key:0x183200 00:22:15.636 [2024-07-15 10:29:52.768915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.636 [2024-07-15 10:29:52.768927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b24f300 len:0x10000 key:0x183200 00:22:15.636 [2024-07-15 10:29:52.768934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.636 [2024-07-15 10:29:52.768948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b23f280 len:0x10000 key:0x183200 00:22:15.636 [2024-07-15 10:29:52.768956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.636 [2024-07-15 10:29:52.768967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b22f200 len:0x10000 key:0x183200 00:22:15.636 [2024-07-15 10:29:52.768975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.636 [2024-07-15 10:29:52.768987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b21f180 len:0x10000 key:0x183200 00:22:15.636 
[2024-07-15 10:29:52.768995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.636 [2024-07-15 10:29:52.769007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b20f100 len:0x10000 key:0x183200 00:22:15.636 [2024-07-15 10:29:52.769015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.636 [2024-07-15 10:29:52.769027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae1f780 len:0x10000 key:0x183e00 00:22:15.636 [2024-07-15 10:29:52.769034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.636 [2024-07-15 10:29:52.772166] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b60e200 was disconnected and freed. reset controller. 00:22:15.636 [2024-07-15 10:29:52.772178] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:15.636 [2024-07-15 10:29:52.772190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:8192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001065f000 len:0x10000 key:0x184300 00:22:15.636 [2024-07-15 10:29:52.772197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.636 [2024-07-15 10:29:52.772211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001063e000 len:0x10000 key:0x184300 00:22:15.637 [2024-07-15 10:29:52.772219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.637 [2024-07-15 10:29:52.772238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:8448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001061d000 len:0x10000 key:0x184300 00:22:15.637 [2024-07-15 10:29:52.772246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.637 [2024-07-15 10:29:52.772259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000105fc000 len:0x10000 key:0x184300 00:22:15.637 [2024-07-15 10:29:52.772267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.637 [2024-07-15 10:29:52.772280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000105db000 len:0x10000 key:0x184300 00:22:15.637 [2024-07-15 10:29:52.772287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.637 [2024-07-15 10:29:52.772300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000105ba000 len:0x10000 key:0x184300 00:22:15.637 [2024-07-15 10:29:52.772311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.637 [2024-07-15 10:29:52.772324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010599000 len:0x10000 key:0x184300 00:22:15.637 [2024-07-15 10:29:52.772331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.637 [2024-07-15 10:29:52.772344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010578000 len:0x10000 key:0x184300 00:22:15.637 [2024-07-15 10:29:52.772352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.637 [2024-07-15 10:29:52.772364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010557000 len:0x10000 key:0x184300 00:22:15.637 [2024-07-15 10:29:52.772372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.637 [2024-07-15 10:29:52.772384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010536000 len:0x10000 key:0x184300 00:22:15.637 [2024-07-15 10:29:52.772392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.637 [2024-07-15 10:29:52.772405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010515000 len:0x10000 key:0x184300 00:22:15.637 [2024-07-15 10:29:52.772413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.637 [2024-07-15 10:29:52.772425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000104f4000 len:0x10000 key:0x184300 00:22:15.637 [2024-07-15 10:29:52.772432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.637 [2024-07-15 10:29:52.772445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000104d3000 len:0x10000 key:0x184300 00:22:15.637 [2024-07-15 10:29:52.772453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.637 [2024-07-15 10:29:52.772465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000104b2000 len:0x10000 key:0x184300 00:22:15.637 [2024-07-15 10:29:52.772472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.637 [2024-07-15 10:29:52.772486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010491000 len:0x10000 key:0x184300 00:22:15.637 [2024-07-15 10:29:52.772494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.637 [2024-07-15 
10:29:52.772506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010470000 len:0x10000 key:0x184300 00:22:15.637 [2024-07-15 10:29:52.772515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.637 [2024-07-15 10:29:52.772528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d584000 len:0x10000 key:0x184300 00:22:15.637 [2024-07-15 10:29:52.772538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.637 [2024-07-15 10:29:52.772550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d563000 len:0x10000 key:0x184300 00:22:15.637 [2024-07-15 10:29:52.772558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.637 [2024-07-15 10:29:52.772571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d542000 len:0x10000 key:0x184300 00:22:15.637 [2024-07-15 10:29:52.772579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.637 [2024-07-15 10:29:52.772591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d521000 len:0x10000 key:0x184300 00:22:15.637 [2024-07-15 10:29:52.772599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.637 [2024-07-15 10:29:52.772612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d500000 len:0x10000 key:0x184300 00:22:15.637 [2024-07-15 10:29:52.772619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.637 [2024-07-15 10:29:52.772631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010ba8000 len:0x10000 key:0x184300 00:22:15.637 [2024-07-15 10:29:52.772639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.637 [2024-07-15 10:29:52.772652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:11008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010bc9000 len:0x10000 key:0x184300 00:22:15.637 [2024-07-15 10:29:52.772659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.637 [2024-07-15 10:29:52.772672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:11136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010bea000 len:0x10000 key:0x184300 00:22:15.637 [2024-07-15 10:29:52.772680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.637 [2024-07-15 10:29:52.772692] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:11264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010c0b000 len:0x10000 key:0x184300 00:22:15.637 [2024-07-15 10:29:52.772700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.637 [2024-07-15 10:29:52.772714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010c2c000 len:0x10000 key:0x184300 00:22:15.637 [2024-07-15 10:29:52.772722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.637 [2024-07-15 10:29:52.772734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010c4d000 len:0x10000 key:0x184300 00:22:15.637 [2024-07-15 10:29:52.772742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.637 [2024-07-15 10:29:52.772754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010c6e000 len:0x10000 key:0x184300 00:22:15.637 [2024-07-15 10:29:52.772762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.637 [2024-07-15 10:29:52.772776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010c8f000 len:0x10000 key:0x184300 00:22:15.637 [2024-07-15 10:29:52.772785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.637 [2024-07-15 10:29:52.772797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000135cf000 len:0x10000 key:0x184300 00:22:15.637 [2024-07-15 10:29:52.772804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.637 [2024-07-15 10:29:52.772817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:12032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000135ae000 len:0x10000 key:0x184300 00:22:15.637 [2024-07-15 10:29:52.772825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.637 [2024-07-15 10:29:52.772838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:12160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001358d000 len:0x10000 key:0x184300 00:22:15.637 [2024-07-15 10:29:52.772845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.637 [2024-07-15 10:29:52.772862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:12288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001356c000 len:0x10000 key:0x184300 00:22:15.637 [2024-07-15 10:29:52.772870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.637 [2024-07-15 10:29:52.772883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:30 nsid:1 lba:12416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001354b000 len:0x10000 key:0x184300 00:22:15.637 [2024-07-15 10:29:52.772891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.637 [2024-07-15 10:29:52.772903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001352a000 len:0x10000 key:0x184300 00:22:15.637 [2024-07-15 10:29:52.772911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.638 [2024-07-15 10:29:52.772923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013509000 len:0x10000 key:0x184300 00:22:15.638 [2024-07-15 10:29:52.772931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.638 [2024-07-15 10:29:52.772944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000134e8000 len:0x10000 key:0x184300 00:22:15.638 [2024-07-15 10:29:52.772952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.638 [2024-07-15 10:29:52.772965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e51d000 len:0x10000 key:0x184300 00:22:15.638 [2024-07-15 10:29:52.772973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.638 [2024-07-15 10:29:52.772985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:13056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e4fc000 len:0x10000 key:0x184300 00:22:15.638 [2024-07-15 10:29:52.772992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.638 [2024-07-15 10:29:52.773007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:13184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e4db000 len:0x10000 key:0x184300 00:22:15.638 [2024-07-15 10:29:52.773015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.638 [2024-07-15 10:29:52.773027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:13312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e4ba000 len:0x10000 key:0x184300 00:22:15.638 [2024-07-15 10:29:52.773035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.638 [2024-07-15 10:29:52.773048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e499000 len:0x10000 key:0x184300 00:22:15.638 [2024-07-15 10:29:52.773055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.638 [2024-07-15 10:29:52.773068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13568 len:128 SGL KEYED DATA 
BLOCK ADDRESS 0x20000e478000 len:0x10000 key:0x184300 00:22:15.638 [2024-07-15 10:29:52.773076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.638 [2024-07-15 10:29:52.773088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e457000 len:0x10000 key:0x184300 00:22:15.638 [2024-07-15 10:29:52.773096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.638 [2024-07-15 10:29:52.773109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e436000 len:0x10000 key:0x184300 00:22:15.638 [2024-07-15 10:29:52.773117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.638 [2024-07-15 10:29:52.773130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e415000 len:0x10000 key:0x184300 00:22:15.638 [2024-07-15 10:29:52.773137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.638 [2024-07-15 10:29:52.773150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:14080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e3f4000 len:0x10000 key:0x184300 00:22:15.638 [2024-07-15 10:29:52.773158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.638 [2024-07-15 10:29:52.773170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:14208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e3d3000 len:0x10000 key:0x184300 00:22:15.638 [2024-07-15 10:29:52.773178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.638 [2024-07-15 10:29:52.773194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:14336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e3b2000 len:0x10000 key:0x184300 00:22:15.638 [2024-07-15 10:29:52.773202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.638 [2024-07-15 10:29:52.773215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000106a1000 len:0x10000 key:0x184300 00:22:15.638 [2024-07-15 10:29:52.773223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.638 [2024-07-15 10:29:52.773240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010680000 len:0x10000 key:0x184300 00:22:15.638 [2024-07-15 10:29:52.773249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.638 [2024-07-15 10:29:52.773262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000107ca000 len:0x10000 key:0x184300 
00:22:15.638 [2024-07-15 10:29:52.773270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.638 [2024-07-15 10:29:52.773282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000107eb000 len:0x10000 key:0x184300 00:22:15.638 [2024-07-15 10:29:52.773290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.638 [2024-07-15 10:29:52.773302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e370000 len:0x10000 key:0x184300 00:22:15.638 [2024-07-15 10:29:52.773311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.638 [2024-07-15 10:29:52.773324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:15104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e391000 len:0x10000 key:0x184300 00:22:15.638 [2024-07-15 10:29:52.773331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.638 [2024-07-15 10:29:52.773344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:15232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001080c000 len:0x10000 key:0x184300 00:22:15.638 [2024-07-15 10:29:52.773352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.638 [2024-07-15 10:29:52.773364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:15360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001082d000 len:0x10000 key:0x184300 00:22:15.638 [2024-07-15 10:29:52.773372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.638 [2024-07-15 10:29:52.773385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001084e000 len:0x10000 key:0x184300 00:22:15.638 [2024-07-15 10:29:52.773393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.638 [2024-07-15 10:29:52.773405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001086f000 len:0x10000 key:0x184300 00:22:15.638 [2024-07-15 10:29:52.773414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.638 [2024-07-15 10:29:52.773426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000137df000 len:0x10000 key:0x184300 00:22:15.638 [2024-07-15 10:29:52.773434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.638 [2024-07-15 10:29:52.773446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000137be000 len:0x10000 key:0x184300 00:22:15.638 [2024-07-15 10:29:52.773454] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.638 [2024-07-15 10:29:52.773467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:16000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001379d000 len:0x10000 key:0x184300 00:22:15.638 [2024-07-15 10:29:52.773476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.638 [2024-07-15 10:29:52.773489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:16128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001377c000 len:0x10000 key:0x184300 00:22:15.638 [2024-07-15 10:29:52.773497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.638 [2024-07-15 10:29:52.773509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:16256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001375b000 len:0x10000 key:0x184300 00:22:15.638 [2024-07-15 10:29:52.773517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:36c0000 sqhd:52b0 p:0 m:0 dnr:0 00:22:15.638 [2024-07-15 10:29:52.794738] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b60df80 was disconnected and freed. reset controller. 00:22:15.638 [2024-07-15 10:29:52.794783] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:15.638 [2024-07-15 10:29:52.794970] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:15.638 [2024-07-15 10:29:52.795007] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:15.638 [2024-07-15 10:29:52.795038] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:15.638 [2024-07-15 10:29:52.795068] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:15.638 [2024-07-15 10:29:52.795102] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:15.638 [2024-07-15 10:29:52.795134] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:15.638 [2024-07-15 10:29:52.795164] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:15.638 [2024-07-15 10:29:52.795194] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:15.638 [2024-07-15 10:29:52.795225] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:15.638 [2024-07-15 10:29:52.795245] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
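The (00/08) pair printed on every completion above is the NVMe status: Status Code Type 0x0 (Generic Command Status) with Status Code 0x08, Command Aborted due to SQ Deletion, which is the expected outcome here because the shutdown path tears down the submission queues while verify I/O is still outstanding. A minimal shell sketch of that decode, using the SCT/SC mapping from the NVMe base specification (this helper is illustrative only and is not part of the SPDK test scripts):

  decode_nvme_status() {                    # usage: decode_nvme_status <sct> <sc>, both as two hex digits
      local sct=$1 sc=$2
      case "$sct/$sc" in
          00/00) echo "GENERIC - SUCCESSFUL COMPLETION" ;;
          00/04) echo "GENERIC - DATA TRANSFER ERROR" ;;
          00/08) echo "GENERIC - COMMAND ABORTED DUE TO SQ DELETION" ;;
          *)     echo "SCT=0x$sct SC=0x$sc (see the NVMe base specification status code tables)" ;;
      esac
  }
  decode_nvme_status 00 08                  # -> GENERIC - COMMAND ABORTED DUE TO SQ DELETION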
00:22:15.638 [2024-07-15 10:29:52.802919] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:22:15.638 [2024-07-15 10:29:52.802943] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:22:15.638 [2024-07-15 10:29:52.802952] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:22:15.638 [2024-07-15 10:29:52.802961] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:22:15.919 task offset: 35840 on job bdev=Nvme1n1 fails
00:22:15.919
00:22:15.919 Latency(us)
00:22:15.919 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:15.919 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:15.919 Job: Nvme1n1 ended in about 2.05 seconds with error
00:22:15.919 Verification LBA range: start 0x0 length 0x400
00:22:15.919 Nvme1n1 : 2.05 125.05 7.82 31.26 0.00 406427.31 30583.47 1048576.00
00:22:15.919 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:15.919 Job: Nvme2n1 ended in about 2.05 seconds with error
00:22:15.919 Verification LBA range: start 0x0 length 0x400
00:22:15.919 Nvme2n1 : 2.05 125.96 7.87 31.25 0.00 400112.18 3440.64 1048576.00
00:22:15.919 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:15.919 Job: Nvme3n1 ended in about 2.05 seconds with error
00:22:15.919 Verification LBA range: start 0x0 length 0x400
00:22:15.919 Nvme3n1 : 2.05 138.09 8.63 31.23 0.00 367448.97 4177.92 1048576.00
00:22:15.919 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:15.919 Job: Nvme4n1 ended in about 2.05 seconds with error
00:22:15.919 Verification LBA range: start 0x0 length 0x400
00:22:15.919 Nvme4n1 : 2.05 127.78 7.99 31.21 0.00 387196.64 12779.52 1048576.00
00:22:15.919 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:15.919 Job: Nvme5n1 ended in about 2.05 seconds with error
00:22:15.919 Verification LBA range: start 0x0 length 0x400
00:22:15.919 Nvme5n1 : 2.05 128.68 8.04 31.20 0.00 380796.71 12670.29 1048576.00
00:22:15.919 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:15.919 Job: Nvme6n1 ended in about 2.05 seconds with error
00:22:15.919 Verification LBA range: start 0x0 length 0x400
00:22:15.919 Nvme6n1 : 2.05 127.65 7.98 31.18 0.00 374373.06 15837.87 1111490.56
00:22:15.919 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:15.919 Job: Nvme7n1 ended in about 2.01 seconds with error
00:22:15.919 Verification LBA range: start 0x0 length 0x400
00:22:15.919 Nvme7n1 : 2.01 127.24 7.95 31.81 0.00 376218.79 18568.53 1104500.05
00:22:15.919 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:15.919 Job: Nvme8n1 ended in about 2.02 seconds with error
00:22:15.919 Verification LBA range: start 0x0 length 0x400
00:22:15.919 Nvme8n1 : 2.02 126.96 7.94 31.74 0.00 373150.04 20862.29 1090519.04
00:22:15.919 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:15.919 Job: Nvme9n1 ended in about 2.02 seconds with error
00:22:15.919 Verification LBA range: start 0x0 length 0x400
00:22:15.919 Nvme9n1 : 2.02 126.69 7.92 31.67 0.00 370035.03 51336.53 1076538.03
00:22:15.919 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:15.919 Job: Nvme10n1 ended in about 2.03 seconds with error
00:22:15.919 Verification LBA range: start 0x0 length 0x400
00:22:15.919 Nvme10n1 : 2.03 31.60 1.98 31.60 0.00 917343.57 52428.80 1062557.01
00:22:15.919 ===================================================================================================================
00:22:15.919 Total : 1185.70 74.11 314.15 0.00 404056.24 3440.64 1111490.56
00:22:15.919 [2024-07-15 10:29:52.825639] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:22:15.919 [2024-07-15 10:29:52.827276] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:22:15.919 [2024-07-15 10:29:52.827292] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:22:15.919 [2024-07-15 10:29:52.827301] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:22:15.919 [2024-07-15 10:29:52.827311] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:22:15.919 [2024-07-15 10:29:52.827320] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:15.919 [2024-07-15 10:29:52.827328] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:22:15.919 [2024-07-15 10:29:52.843726] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:22:15.919 [2024-07-15 10:29:52.843748] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:22:15.919 [2024-07-15 10:29:52.843755] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192d7000
00:22:15.919 [2024-07-15 10:29:52.843866] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:22:15.919 [2024-07-15 10:29:52.843875] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:22:15.919 [2024-07-15 10:29:52.843881] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192cf100
00:22:15.919 [2024-07-15 10:29:52.843996] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:22:15.919 [2024-07-15 10:29:52.844004] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:22:15.919 [2024-07-15 10:29:52.844010] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192bec00
00:22:15.919 [2024-07-15 10:29:52.844093] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:22:15.919 [2024-07-15 10:29:52.844101] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:22:15.919 [2024-07-15 10:29:52.844107] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192b61c0
00:22:15.919 [2024-07-15 10:29:52.844224] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:22:15.919 [2024-07-15 10:29:52.844241] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:22:15.919 [2024-07-15
10:29:52.844247] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280 00:22:15.919 [2024-07-15 10:29:52.844427] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:22:15.919 [2024-07-15 10:29:52.844436] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:22:15.919 [2024-07-15 10:29:52.844442] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:22:15.919 [2024-07-15 10:29:52.844588] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:22:15.919 [2024-07-15 10:29:52.844596] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:22:15.919 [2024-07-15 10:29:52.844602] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200019290040 00:22:15.919 [2024-07-15 10:29:52.844683] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:22:15.919 [2024-07-15 10:29:52.844692] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:22:15.919 [2024-07-15 10:29:52.844698] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001926e8c0 00:22:15.919 [2024-07-15 10:29:52.844843] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:22:15.919 [2024-07-15 10:29:52.844852] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:22:15.919 [2024-07-15 10:29:52.844858] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200019298000 00:22:15.919 [2024-07-15 10:29:52.844934] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:22:15.919 [2024-07-15 10:29:52.844944] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:22:15.919 [2024-07-15 10:29:52.844949] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192acfc0 00:22:15.919 10:29:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 2994715 00:22:15.919 10:29:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:22:15.919 10:29:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:22:15.919 10:29:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:15.919 10:29:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:15.919 10:29:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:15.920 10:29:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:15.920 10:29:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:22:15.920 10:29:52 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:22:15.920 10:29:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:22:15.920 10:29:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:22:15.920 10:29:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:15.920 10:29:52 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:22:15.920 rmmod nvme_rdma 00:22:15.920 rmmod nvme_fabrics 00:22:15.920 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 121: 2994715 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") -q 64 -o 65536 -w verify -t 10 00:22:15.920 10:29:53 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:15.920 10:29:53 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:22:15.920 10:29:53 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:22:15.920 10:29:53 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:22:15.920 10:29:53 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:15.920 10:29:53 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:22:15.920 00:22:15.920 real 0m5.103s 00:22:15.920 user 0m17.564s 00:22:15.920 sys 0m0.995s 00:22:15.920 10:29:53 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:15.920 10:29:53 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:15.920 ************************************ 00:22:15.920 END TEST nvmf_shutdown_tc3 00:22:15.920 ************************************ 00:22:15.920 10:29:53 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:22:15.920 10:29:53 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:22:15.920 00:22:15.920 real 0m25.526s 00:22:15.920 user 1m10.907s 00:22:15.920 sys 0m9.074s 00:22:15.920 10:29:53 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:15.920 10:29:53 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:15.920 ************************************ 00:22:15.920 END TEST nvmf_shutdown 00:22:15.920 ************************************ 00:22:16.185 10:29:53 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:22:16.185 10:29:53 nvmf_rdma -- nvmf/nvmf.sh@86 -- # timing_exit target 00:22:16.185 10:29:53 nvmf_rdma -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:16.185 10:29:53 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:16.185 10:29:53 nvmf_rdma -- nvmf/nvmf.sh@88 -- # timing_enter host 00:22:16.185 10:29:53 nvmf_rdma -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:16.185 10:29:53 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:16.185 10:29:53 nvmf_rdma -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:22:16.185 10:29:53 nvmf_rdma -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:22:16.185 10:29:53 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:16.185 10:29:53 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:16.185 10:29:53 
nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:16.185 ************************************ 00:22:16.185 START TEST nvmf_multicontroller 00:22:16.185 ************************************ 00:22:16.185 10:29:53 nvmf_rdma.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:22:16.185 * Looking for test storage... 00:22:16.185 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:22:16.185 10:29:53 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:16.185 10:29:53 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:22:16.185 10:29:53 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:16.185 10:29:53 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:16.185 10:29:53 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:16.185 10:29:53 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:16.185 10:29:53 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:16.185 10:29:53 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:16.185 10:29:53 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:16.185 10:29:53 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:16.185 10:29:53 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:16.185 10:29:53 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:16.185 10:29:53 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:16.185 10:29:53 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:16.185 10:29:53 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:16.185 10:29:53 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:16.185 10:29:53 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:16.185 10:29:53 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:16.185 10:29:53 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:16.185 10:29:53 nvmf_rdma.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:16.185 10:29:53 nvmf_rdma.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:16.185 10:29:53 nvmf_rdma.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:16.185 10:29:53 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.185 10:29:53 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.185 10:29:53 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.185 10:29:53 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:22:16.185 10:29:53 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.185 10:29:53 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:22:16.185 10:29:53 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:16.185 10:29:53 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:16.185 10:29:53 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:16.185 10:29:53 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:16.185 10:29:53 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:16.185 10:29:53 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:16.185 10:29:53 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:16.185 10:29:53 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:16.185 10:29:53 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:16.185 10:29:53 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 
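The NVME_HOSTNQN / NVME_HOSTID pair set a few lines above is generated fresh for the run: the NQN comes straight from nvme-cli and the host ID is the UUID portion of it. A short sketch of that relationship (the parameter expansion used to strip the prefix is an assumption made for illustration; the trace only shows the resulting values):

  NVME_HOSTNQN=$(nvme gen-hostnqn)             # e.g. nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
  NVME_HOSTID=${NVME_HOSTNQN##*:}              # assumed derivation: keep everything after the last ':' -> the bare UUID
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")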
00:22:16.185 10:29:53 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:22:16.185 10:29:53 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:22:16.185 10:29:53 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:16.185 10:29:53 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' rdma == rdma ']' 00:22:16.185 10:29:53 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@19 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:22:16.185 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:22:16.185 10:29:53 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@20 -- # exit 0 00:22:16.185 00:22:16.185 real 0m0.114s 00:22:16.185 user 0m0.046s 00:22:16.185 sys 0m0.073s 00:22:16.185 10:29:53 nvmf_rdma.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:16.185 10:29:53 nvmf_rdma.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:16.185 ************************************ 00:22:16.185 END TEST nvmf_multicontroller 00:22:16.185 ************************************ 00:22:16.185 10:29:53 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:22:16.185 10:29:53 nvmf_rdma -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:22:16.185 10:29:53 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:16.185 10:29:53 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:16.185 10:29:53 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:16.446 ************************************ 00:22:16.446 START TEST nvmf_aer 00:22:16.446 ************************************ 00:22:16.446 10:29:53 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:22:16.446 * Looking for test storage... 
00:22:16.446 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:22:16.446 10:29:53 nvmf_rdma.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:16.446 10:29:53 nvmf_rdma.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:22:16.446 10:29:53 nvmf_rdma.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:16.446 10:29:53 nvmf_rdma.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:16.446 10:29:53 nvmf_rdma.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:16.446 10:29:53 nvmf_rdma.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:16.446 10:29:53 nvmf_rdma.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:16.446 10:29:53 nvmf_rdma.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:16.446 10:29:53 nvmf_rdma.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:16.446 10:29:53 nvmf_rdma.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:16.446 10:29:53 nvmf_rdma.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:16.446 10:29:53 nvmf_rdma.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:16.446 10:29:53 nvmf_rdma.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:16.446 10:29:53 nvmf_rdma.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:16.446 10:29:53 nvmf_rdma.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:16.446 10:29:53 nvmf_rdma.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:16.446 10:29:53 nvmf_rdma.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:16.446 10:29:53 nvmf_rdma.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:16.446 10:29:53 nvmf_rdma.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:16.446 10:29:53 nvmf_rdma.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:16.446 10:29:53 nvmf_rdma.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:16.447 10:29:53 nvmf_rdma.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:16.447 10:29:53 nvmf_rdma.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.447 10:29:53 nvmf_rdma.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.447 10:29:53 nvmf_rdma.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.447 10:29:53 nvmf_rdma.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:22:16.447 10:29:53 nvmf_rdma.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.447 10:29:53 nvmf_rdma.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:22:16.447 10:29:53 nvmf_rdma.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:16.447 10:29:53 nvmf_rdma.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:16.447 10:29:53 nvmf_rdma.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:16.447 10:29:53 nvmf_rdma.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:16.447 10:29:53 nvmf_rdma.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:16.447 10:29:53 nvmf_rdma.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:16.447 10:29:53 nvmf_rdma.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:16.447 10:29:53 nvmf_rdma.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:16.447 10:29:53 nvmf_rdma.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:22:16.447 10:29:53 nvmf_rdma.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:22:16.447 10:29:53 nvmf_rdma.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:16.447 10:29:53 nvmf_rdma.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:16.447 10:29:53 nvmf_rdma.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:16.447 10:29:53 nvmf_rdma.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:16.447 10:29:53 nvmf_rdma.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:16.447 10:29:53 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:16.447 10:29:53 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:16.447 10:29:53 nvmf_rdma.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:16.447 10:29:53 nvmf_rdma.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:16.447 10:29:53 nvmf_rdma.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:22:16.447 10:29:53 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:24.585 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:24.585 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:22:24.585 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:24.585 10:30:01 nvmf_rdma.nvmf_aer -- 
nvmf/common.sh@292 -- # pci_net_devs=() 00:22:24.585 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:24.585 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:24.585 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:24.585 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:22:24.585 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:24.585 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:22:24.585 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:22:24.585 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:22:24.585 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:22:24.585 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:22:24.585 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:22:24.585 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:24.585 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:24.585 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:24.585 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:24.585 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:24.585 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:24.585 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:24.585 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:24.585 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:24.585 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:24.585 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:24.585 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:24.585 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:22:24.585 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:22:24.585 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:22:24.585 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:22:24.585 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:22:24.585 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:24.585 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:24.585 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:22:24.585 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:22:24.585 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:24.585 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:24.585 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:24.585 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:24.585 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:24.585 10:30:01 
nvmf_rdma.nvmf_aer -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:24.585 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:24.585 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:22:24.585 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:22:24.585 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:24.585 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:24.585 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:24.585 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:24.585 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:24.585 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:24.585 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:24.585 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:22:24.585 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:24.585 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:24.585 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:24.585 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:24.585 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:24.585 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:22:24.585 Found net devices under 0000:98:00.0: mlx_0_0 00:22:24.585 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:24.585 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:24.585 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:24.585 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:24.585 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:24.585 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:24.585 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:22:24.586 Found net devices under 0000:98:00.1: mlx_0_1 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@420 -- # rdma_device_init 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@58 -- # uname 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@62 -- # modprobe ib_cm 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@63 -- # modprobe ib_core 00:22:24.586 
10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@64 -- # modprobe ib_umad 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@66 -- # modprobe iw_cm 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@502 -- # allocate_nic_ips 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@73 -- # get_rdma_if_list 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:22:24.586 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:24.586 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:22:24.586 altname enp152s0f0np0 00:22:24.586 altname ens817f0np0 00:22:24.586 inet 192.168.100.8/24 scope global mlx_0_0 00:22:24.586 valid_lft forever preferred_lft forever 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 
00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:22:24.586 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:24.586 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:22:24.586 altname enp152s0f1np1 00:22:24.586 altname ens817f1np1 00:22:24.586 inet 192.168.100.9/24 scope global mlx_0_1 00:22:24.586 valid_lft forever preferred_lft forever 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@86 -- # get_rdma_if_list 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr 
show mlx_0_0 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:22:24.586 192.168.100.9' 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:22:24.586 192.168.100.9' 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@457 -- # head -n 1 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:22:24.586 192.168.100.9' 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@458 -- # tail -n +2 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@458 -- # head -n 1 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=2999796 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 2999796 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 2999796 ']' 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:24.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
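[Editor's note] The get_ip_address helper traced above resolves each RDMA interface to its IPv4 address by filtering `ip -o -4 addr show` through awk and cut. A minimal standalone sketch of that pipeline, using the interface name mlx_0_1 from the trace (the error message at the end is an illustrative addition, not part of nvmf/common.sh):

    # Print the bare IPv4 address of an interface, as the trace does for mlx_0_1
    get_ip_address() {
        local interface=$1
        # "ip -o -4" prints one record per address; field 4 is "addr/prefix"
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    ip=$(get_ip_address mlx_0_1)   # yields 192.168.100.9 on this test rig
    [[ -z $ip ]] && echo "no IPv4 address configured on mlx_0_1" >&2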
00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:24.586 10:30:01 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:24.586 [2024-07-15 10:30:01.626121] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:22:24.586 [2024-07-15 10:30:01.626191] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:24.586 EAL: No free 2048 kB hugepages reported on node 1 00:22:24.586 [2024-07-15 10:30:01.701842] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:24.586 [2024-07-15 10:30:01.776715] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:24.586 [2024-07-15 10:30:01.776758] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:24.587 [2024-07-15 10:30:01.776767] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:24.587 [2024-07-15 10:30:01.776773] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:24.587 [2024-07-15 10:30:01.776779] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:24.587 [2024-07-15 10:30:01.776918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:24.587 [2024-07-15 10:30:01.777035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:24.587 [2024-07-15 10:30:01.777193] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:24.587 [2024-07-15 10:30:01.777193] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:25.527 10:30:02 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:25.527 10:30:02 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:22:25.527 10:30:02 nvmf_rdma.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:25.527 10:30:02 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:25.527 10:30:02 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:25.527 10:30:02 nvmf_rdma.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:25.527 10:30:02 nvmf_rdma.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:22:25.527 10:30:02 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.527 10:30:02 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:25.527 [2024-07-15 10:30:02.492697] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2446200/0x244a6f0) succeed. 00:22:25.527 [2024-07-15 10:30:02.507200] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2447840/0x248bd80) succeed. 
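[Editor's note] With nvmf_tgt up and listening on /var/tmp/spdk.sock, host/aer.sh drives it over JSON-RPC; the calls that follow in the trace amount to the sequence below. It is shown here with scripts/rpc.py, which is what the rpc_cmd wrapper in the test harness normally resolves to; treat it as an illustrative sketch assembled from the traced arguments, not a verbatim excerpt of aer.sh:

    # Build an NVMe-oF/RDMA target with one malloc-backed namespace
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420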
00:22:25.527 10:30:02 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.527 10:30:02 nvmf_rdma.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:22:25.527 10:30:02 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.527 10:30:02 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:25.527 Malloc0 00:22:25.527 10:30:02 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.527 10:30:02 nvmf_rdma.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:22:25.527 10:30:02 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.527 10:30:02 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:25.527 10:30:02 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.527 10:30:02 nvmf_rdma.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:25.527 10:30:02 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.527 10:30:02 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:25.527 10:30:02 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.527 10:30:02 nvmf_rdma.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:22:25.527 10:30:02 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.527 10:30:02 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:25.527 [2024-07-15 10:30:02.682458] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:25.527 10:30:02 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.527 10:30:02 nvmf_rdma.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:22:25.527 10:30:02 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.527 10:30:02 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:25.527 [ 00:22:25.527 { 00:22:25.527 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:25.527 "subtype": "Discovery", 00:22:25.527 "listen_addresses": [], 00:22:25.527 "allow_any_host": true, 00:22:25.527 "hosts": [] 00:22:25.527 }, 00:22:25.527 { 00:22:25.527 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:25.527 "subtype": "NVMe", 00:22:25.527 "listen_addresses": [ 00:22:25.527 { 00:22:25.527 "trtype": "RDMA", 00:22:25.527 "adrfam": "IPv4", 00:22:25.527 "traddr": "192.168.100.8", 00:22:25.527 "trsvcid": "4420" 00:22:25.527 } 00:22:25.527 ], 00:22:25.527 "allow_any_host": true, 00:22:25.527 "hosts": [], 00:22:25.527 "serial_number": "SPDK00000000000001", 00:22:25.527 "model_number": "SPDK bdev Controller", 00:22:25.527 "max_namespaces": 2, 00:22:25.527 "min_cntlid": 1, 00:22:25.527 "max_cntlid": 65519, 00:22:25.527 "namespaces": [ 00:22:25.527 { 00:22:25.527 "nsid": 1, 00:22:25.527 "bdev_name": "Malloc0", 00:22:25.527 "name": "Malloc0", 00:22:25.527 "nguid": "563D8201F79A4EABAB1719E3641CE95F", 00:22:25.527 "uuid": "563d8201-f79a-4eab-ab17-19e3641ce95f" 00:22:25.527 } 00:22:25.527 ] 00:22:25.527 } 00:22:25.527 ] 00:22:25.527 10:30:02 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.527 10:30:02 nvmf_rdma.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:22:25.527 10:30:02 nvmf_rdma.nvmf_aer -- host/aer.sh@24 -- # 
rm -f /tmp/aer_touch_file 00:22:25.527 10:30:02 nvmf_rdma.nvmf_aer -- host/aer.sh@33 -- # aerpid=2999970 00:22:25.527 10:30:02 nvmf_rdma.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:22:25.528 10:30:02 nvmf_rdma.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:22:25.528 10:30:02 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:22:25.528 10:30:02 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:25.528 10:30:02 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:22:25.528 10:30:02 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:22:25.528 10:30:02 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:22:25.788 EAL: No free 2048 kB hugepages reported on node 1 00:22:25.788 10:30:02 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:25.788 10:30:02 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:22:25.788 10:30:02 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:22:25.788 10:30:02 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:22:25.788 10:30:02 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:25.788 10:30:02 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:25.788 10:30:02 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:22:25.788 10:30:02 nvmf_rdma.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:22:25.788 10:30:02 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.788 10:30:02 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:25.788 Malloc1 00:22:25.788 10:30:02 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.788 10:30:02 nvmf_rdma.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:22:25.788 10:30:02 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.788 10:30:02 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:25.788 10:30:02 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:25.788 10:30:02 nvmf_rdma.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:22:25.788 10:30:02 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:25.788 10:30:02 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:25.788 [ 00:22:25.788 { 00:22:25.788 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:25.788 "subtype": "Discovery", 00:22:25.788 "listen_addresses": [], 00:22:25.788 "allow_any_host": true, 00:22:25.788 "hosts": [] 00:22:25.788 }, 00:22:25.788 { 00:22:25.788 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:25.788 "subtype": "NVMe", 00:22:25.788 "listen_addresses": [ 00:22:25.788 { 00:22:25.788 "trtype": "RDMA", 00:22:25.788 "adrfam": "IPv4", 00:22:25.788 "traddr": "192.168.100.8", 00:22:25.788 "trsvcid": "4420" 00:22:25.788 } 00:22:25.788 ], 00:22:25.788 "allow_any_host": true, 00:22:25.788 "hosts": [], 00:22:25.788 "serial_number": "SPDK00000000000001", 00:22:25.788 "model_number": "SPDK bdev Controller", 00:22:26.049 "max_namespaces": 2, 00:22:26.049 
"min_cntlid": 1, 00:22:26.049 "max_cntlid": 65519, 00:22:26.049 "namespaces": [ 00:22:26.049 { 00:22:26.049 "nsid": 1, 00:22:26.049 "bdev_name": "Malloc0", 00:22:26.049 "name": "Malloc0", 00:22:26.049 "nguid": "563D8201F79A4EABAB1719E3641CE95F", 00:22:26.049 "uuid": "563d8201-f79a-4eab-ab17-19e3641ce95f" 00:22:26.049 }, 00:22:26.049 { 00:22:26.049 "nsid": 2, 00:22:26.049 "bdev_name": "Malloc1", 00:22:26.049 "name": "Malloc1", 00:22:26.049 "nguid": "32BF64005E964AC9A197DED01BB1EE6D", 00:22:26.049 "uuid": "32bf6400-5e96-4ac9-a197-ded01bb1ee6d" 00:22:26.049 } 00:22:26.049 ] 00:22:26.049 } 00:22:26.049 ] 00:22:26.049 10:30:02 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.049 10:30:02 nvmf_rdma.nvmf_aer -- host/aer.sh@43 -- # wait 2999970 00:22:26.049 Asynchronous Event Request test 00:22:26.049 Attaching to 192.168.100.8 00:22:26.049 Attached to 192.168.100.8 00:22:26.049 Registering asynchronous event callbacks... 00:22:26.049 Starting namespace attribute notice tests for all controllers... 00:22:26.049 192.168.100.8: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:22:26.049 aer_cb - Changed Namespace 00:22:26.049 Cleaning up... 00:22:26.049 10:30:03 nvmf_rdma.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:26.050 10:30:03 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.050 10:30:03 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:26.050 10:30:03 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.050 10:30:03 nvmf_rdma.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:22:26.050 10:30:03 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.050 10:30:03 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:26.050 10:30:03 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.050 10:30:03 nvmf_rdma.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:26.050 10:30:03 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.050 10:30:03 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:26.050 10:30:03 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.050 10:30:03 nvmf_rdma.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:22:26.050 10:30:03 nvmf_rdma.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:22:26.050 10:30:03 nvmf_rdma.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:26.050 10:30:03 nvmf_rdma.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:22:26.050 10:30:03 nvmf_rdma.nvmf_aer -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:22:26.050 10:30:03 nvmf_rdma.nvmf_aer -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:22:26.050 10:30:03 nvmf_rdma.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:22:26.050 10:30:03 nvmf_rdma.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:26.050 10:30:03 nvmf_rdma.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:22:26.050 rmmod nvme_rdma 00:22:26.050 rmmod nvme_fabrics 00:22:26.050 10:30:03 nvmf_rdma.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:26.050 10:30:03 nvmf_rdma.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:22:26.050 10:30:03 nvmf_rdma.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:22:26.050 10:30:03 nvmf_rdma.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 2999796 ']' 00:22:26.050 10:30:03 nvmf_rdma.nvmf_aer -- nvmf/common.sh@490 -- # 
killprocess 2999796 00:22:26.050 10:30:03 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 2999796 ']' 00:22:26.050 10:30:03 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 2999796 00:22:26.050 10:30:03 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:22:26.050 10:30:03 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:26.050 10:30:03 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2999796 00:22:26.050 10:30:03 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:26.050 10:30:03 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:26.050 10:30:03 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2999796' 00:22:26.050 killing process with pid 2999796 00:22:26.050 10:30:03 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@967 -- # kill 2999796 00:22:26.050 10:30:03 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@972 -- # wait 2999796 00:22:26.310 10:30:03 nvmf_rdma.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:26.310 10:30:03 nvmf_rdma.nvmf_aer -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:22:26.310 00:22:26.310 real 0m10.018s 00:22:26.310 user 0m8.799s 00:22:26.310 sys 0m6.437s 00:22:26.310 10:30:03 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:26.310 10:30:03 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:26.310 ************************************ 00:22:26.310 END TEST nvmf_aer 00:22:26.310 ************************************ 00:22:26.310 10:30:03 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:22:26.310 10:30:03 nvmf_rdma -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:22:26.310 10:30:03 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:26.310 10:30:03 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:26.310 10:30:03 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:26.310 ************************************ 00:22:26.310 START TEST nvmf_async_init 00:22:26.310 ************************************ 00:22:26.310 10:30:03 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:22:26.570 * Looking for test storage... 
00:22:26.570 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:22:26.570 10:30:03 nvmf_rdma.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:26.570 10:30:03 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:22:26.570 10:30:03 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:26.570 10:30:03 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:26.570 10:30:03 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:26.570 10:30:03 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:26.570 10:30:03 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:26.570 10:30:03 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:26.570 10:30:03 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:26.570 10:30:03 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:26.570 10:30:03 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:26.570 10:30:03 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:26.570 10:30:03 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:26.570 10:30:03 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:26.570 10:30:03 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:26.570 10:30:03 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:26.570 10:30:03 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:26.570 10:30:03 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:26.570 10:30:03 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:26.570 10:30:03 nvmf_rdma.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:26.570 10:30:03 nvmf_rdma.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:26.570 10:30:03 nvmf_rdma.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:26.570 10:30:03 nvmf_rdma.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.571 10:30:03 nvmf_rdma.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.571 10:30:03 nvmf_rdma.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.571 10:30:03 nvmf_rdma.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:22:26.571 10:30:03 nvmf_rdma.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.571 10:30:03 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:22:26.571 10:30:03 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:26.571 10:30:03 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:26.571 10:30:03 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:26.571 10:30:03 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:26.571 10:30:03 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:26.571 10:30:03 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:26.571 10:30:03 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:26.571 10:30:03 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:26.571 10:30:03 nvmf_rdma.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:22:26.571 10:30:03 nvmf_rdma.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:22:26.571 10:30:03 nvmf_rdma.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:22:26.571 10:30:03 nvmf_rdma.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:22:26.571 10:30:03 nvmf_rdma.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:22:26.571 10:30:03 nvmf_rdma.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:22:26.571 10:30:03 nvmf_rdma.nvmf_async_init -- host/async_init.sh@20 -- # nguid=10d28b758c224d26814ebd67e230e0e9 00:22:26.571 10:30:03 nvmf_rdma.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:22:26.571 10:30:03 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:22:26.571 
10:30:03 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:26.571 10:30:03 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:26.571 10:30:03 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:26.571 10:30:03 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:26.571 10:30:03 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:26.571 10:30:03 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:26.571 10:30:03 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.571 10:30:03 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:26.571 10:30:03 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:26.571 10:30:03 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:22:26.571 10:30:03 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:34.714 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:34.714 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:22:34.714 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:34.714 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:34.714 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:34.714 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:34.714 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:34.714 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:22:34.714 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:34.714 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:22:34.714 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:22:34.714 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:22:34.714 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:22:34.714 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:22:34.714 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:22:34.714 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:34.714 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:34.714 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:34.714 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:34.714 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:34.714 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:34.714 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:34.714 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:34.714 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:34.714 10:30:11 
nvmf_rdma.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:34.714 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:34.714 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:34.714 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:22:34.714 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:22:34.714 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:22:34.714 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:22:34.714 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:22:34.714 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:34.714 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:34.714 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:22:34.714 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:22:34.714 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:34.714 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:34.714 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:34.714 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:34.714 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:34.714 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:34.714 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:34.714 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:22:34.714 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:22:34.714 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:34.714 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:34.714 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:34.714 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:34.714 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:34.714 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:34.714 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:34.714 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:22:34.714 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:34.714 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:34.714 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:34.714 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:34.714 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:34.714 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:22:34.714 Found net devices under 0000:98:00.0: mlx_0_0 
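[Editor's note] The device discovery traced here is a sysfs walk: for each candidate mlx5 PCI function the script globs /sys/bus/pci/devices/<bdf>/net/ and keeps whatever netdev names it finds. A reduced sketch of that loop, assuming pci_devs is already populated with the BDFs printed above (the nullglob guard is a simplification of the harness's element-count check):

    shopt -s nullglob                            # empty array when no netdev is bound
    net_devs=()
    for pci in "${pci_devs[@]}"; do              # e.g. 0000:98:00.0 0000:98:00.1
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        (( ${#pci_net_devs[@]} == 0 )) && continue
        pci_net_devs=("${pci_net_devs[@]##*/}")  # strip the sysfs path, keep mlx_0_x
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done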
00:22:34.714 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:22:34.715 Found net devices under 0000:98:00.1: mlx_0_1 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@420 -- # rdma_device_init 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@58 -- # uname 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@62 -- # modprobe ib_cm 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@63 -- # modprobe ib_core 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@64 -- # modprobe ib_umad 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@66 -- # modprobe iw_cm 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@502 -- # allocate_nic_ips 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@73 -- # get_rdma_if_list 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:34.715 
10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:22:34.715 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:34.715 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:22:34.715 altname enp152s0f0np0 00:22:34.715 altname ens817f0np0 00:22:34.715 inet 192.168.100.8/24 scope global mlx_0_0 00:22:34.715 valid_lft forever preferred_lft forever 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:22:34.715 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:34.715 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:22:34.715 altname enp152s0f1np1 00:22:34.715 altname ens817f1np1 00:22:34.715 inet 192.168.100.9/24 scope global mlx_0_1 00:22:34.715 valid_lft forever preferred_lft forever 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:34.715 
10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@86 -- # get_rdma_if_list 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:22:34.715 192.168.100.9' 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:22:34.715 192.168.100.9' 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@457 -- # head -n 1 
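[Editor's note] get_available_rdma_ips returns one address per RDMA-capable netdev, newline-separated, and the harness then peels off the first and second entries as the two target IPs; that is what the head/tail pipeline around this point in the trace does. A compact sketch, with RDMA_IP_LIST seeded from the two addresses printed above:

    # Split the newline-separated RDMA IP list into first/second target addresses
    RDMA_IP_LIST="$(printf '%s\n' 192.168.100.8 192.168.100.9)"
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
    [[ -z $NVMF_FIRST_TARGET_IP ]] && echo "no RDMA-capable interfaces found" >&2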
00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:22:34.715 192.168.100.9' 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@458 -- # tail -n +2 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@458 -- # head -n 1 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=3004750 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 3004750 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 3004750 ']' 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:34.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:34.715 10:30:11 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:34.716 10:30:11 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:34.716 [2024-07-15 10:30:11.822228] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:22:34.716 [2024-07-15 10:30:11.822304] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:34.716 EAL: No free 2048 kB hugepages reported on node 1 00:22:34.716 [2024-07-15 10:30:11.897532] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:34.976 [2024-07-15 10:30:11.972037] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:34.976 [2024-07-15 10:30:11.972080] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:34.977 [2024-07-15 10:30:11.972088] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:34.977 [2024-07-15 10:30:11.972095] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:34.977 [2024-07-15 10:30:11.972101] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:34.977 [2024-07-15 10:30:11.972121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:35.545 10:30:12 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:35.546 10:30:12 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:22:35.546 10:30:12 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:35.546 10:30:12 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:35.546 10:30:12 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:35.546 10:30:12 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:35.546 10:30:12 nvmf_rdma.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:22:35.546 10:30:12 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.546 10:30:12 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:35.546 [2024-07-15 10:30:12.681202] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x126bf90/0x1270480) succeed. 00:22:35.546 [2024-07-15 10:30:12.694451] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x126d490/0x12b1b10) succeed. 00:22:35.546 10:30:12 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.546 10:30:12 nvmf_rdma.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:22:35.546 10:30:12 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.546 10:30:12 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:35.806 null0 00:22:35.806 10:30:12 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.806 10:30:12 nvmf_rdma.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:22:35.806 10:30:12 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.806 10:30:12 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:35.806 10:30:12 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.806 10:30:12 nvmf_rdma.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:22:35.806 10:30:12 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.806 10:30:12 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:35.806 10:30:12 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.806 10:30:12 nvmf_rdma.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 10d28b758c224d26814ebd67e230e0e9 00:22:35.806 10:30:12 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.806 10:30:12 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:35.806 10:30:12 nvmf_rdma.nvmf_async_init -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.806 10:30:12 nvmf_rdma.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:22:35.806 10:30:12 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.806 10:30:12 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:35.806 [2024-07-15 10:30:12.790777] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:35.806 10:30:12 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.806 10:30:12 nvmf_rdma.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:22:35.806 10:30:12 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.806 10:30:12 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:35.806 nvme0n1 00:22:35.806 10:30:12 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.806 10:30:12 nvmf_rdma.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:35.806 10:30:12 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.806 10:30:12 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:35.806 [ 00:22:35.806 { 00:22:35.806 "name": "nvme0n1", 00:22:35.806 "aliases": [ 00:22:35.806 "10d28b75-8c22-4d26-814e-bd67e230e0e9" 00:22:35.806 ], 00:22:35.806 "product_name": "NVMe disk", 00:22:35.806 "block_size": 512, 00:22:35.806 "num_blocks": 2097152, 00:22:35.806 "uuid": "10d28b75-8c22-4d26-814e-bd67e230e0e9", 00:22:35.806 "assigned_rate_limits": { 00:22:35.806 "rw_ios_per_sec": 0, 00:22:35.806 "rw_mbytes_per_sec": 0, 00:22:35.806 "r_mbytes_per_sec": 0, 00:22:35.806 "w_mbytes_per_sec": 0 00:22:35.806 }, 00:22:35.806 "claimed": false, 00:22:35.806 "zoned": false, 00:22:35.806 "supported_io_types": { 00:22:35.806 "read": true, 00:22:35.806 "write": true, 00:22:35.806 "unmap": false, 00:22:35.806 "flush": true, 00:22:35.806 "reset": true, 00:22:35.806 "nvme_admin": true, 00:22:35.806 "nvme_io": true, 00:22:35.806 "nvme_io_md": false, 00:22:35.806 "write_zeroes": true, 00:22:35.806 "zcopy": false, 00:22:35.806 "get_zone_info": false, 00:22:35.806 "zone_management": false, 00:22:35.806 "zone_append": false, 00:22:35.806 "compare": true, 00:22:35.806 "compare_and_write": true, 00:22:35.806 "abort": true, 00:22:35.806 "seek_hole": false, 00:22:35.806 "seek_data": false, 00:22:35.806 "copy": true, 00:22:35.806 "nvme_iov_md": false 00:22:35.806 }, 00:22:35.806 "memory_domains": [ 00:22:35.806 { 00:22:35.806 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:22:35.806 "dma_device_type": 0 00:22:35.806 } 00:22:35.806 ], 00:22:35.806 "driver_specific": { 00:22:35.806 "nvme": [ 00:22:35.806 { 00:22:35.806 "trid": { 00:22:35.806 "trtype": "RDMA", 00:22:35.806 "adrfam": "IPv4", 00:22:35.806 "traddr": "192.168.100.8", 00:22:35.806 "trsvcid": "4420", 00:22:35.806 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:35.806 }, 00:22:35.806 "ctrlr_data": { 00:22:35.806 "cntlid": 1, 00:22:35.806 "vendor_id": "0x8086", 00:22:35.806 "model_number": "SPDK bdev Controller", 00:22:35.806 "serial_number": "00000000000000000000", 00:22:35.807 "firmware_revision": "24.09", 00:22:35.807 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:35.807 "oacs": { 00:22:35.807 "security": 0, 
00:22:35.807 "format": 0, 00:22:35.807 "firmware": 0, 00:22:35.807 "ns_manage": 0 00:22:35.807 }, 00:22:35.807 "multi_ctrlr": true, 00:22:35.807 "ana_reporting": false 00:22:35.807 }, 00:22:35.807 "vs": { 00:22:35.807 "nvme_version": "1.3" 00:22:35.807 }, 00:22:35.807 "ns_data": { 00:22:35.807 "id": 1, 00:22:35.807 "can_share": true 00:22:35.807 } 00:22:35.807 } 00:22:35.807 ], 00:22:35.807 "mp_policy": "active_passive" 00:22:35.807 } 00:22:35.807 } 00:22:35.807 ] 00:22:35.807 10:30:12 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.807 10:30:12 nvmf_rdma.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:22:35.807 10:30:12 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.807 10:30:12 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:35.807 [2024-07-15 10:30:12.918975] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:35.807 [2024-07-15 10:30:12.945242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:35.807 [2024-07-15 10:30:12.972726] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:35.807 10:30:12 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.807 10:30:12 nvmf_rdma.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:35.807 10:30:12 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.807 10:30:12 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:35.807 [ 00:22:35.807 { 00:22:35.807 "name": "nvme0n1", 00:22:35.807 "aliases": [ 00:22:35.807 "10d28b75-8c22-4d26-814e-bd67e230e0e9" 00:22:35.807 ], 00:22:35.807 "product_name": "NVMe disk", 00:22:35.807 "block_size": 512, 00:22:35.807 "num_blocks": 2097152, 00:22:35.807 "uuid": "10d28b75-8c22-4d26-814e-bd67e230e0e9", 00:22:35.807 "assigned_rate_limits": { 00:22:35.807 "rw_ios_per_sec": 0, 00:22:35.807 "rw_mbytes_per_sec": 0, 00:22:35.807 "r_mbytes_per_sec": 0, 00:22:35.807 "w_mbytes_per_sec": 0 00:22:35.807 }, 00:22:35.807 "claimed": false, 00:22:35.807 "zoned": false, 00:22:35.807 "supported_io_types": { 00:22:35.807 "read": true, 00:22:35.807 "write": true, 00:22:35.807 "unmap": false, 00:22:35.807 "flush": true, 00:22:35.807 "reset": true, 00:22:35.807 "nvme_admin": true, 00:22:35.807 "nvme_io": true, 00:22:35.807 "nvme_io_md": false, 00:22:35.807 "write_zeroes": true, 00:22:35.807 "zcopy": false, 00:22:35.807 "get_zone_info": false, 00:22:35.807 "zone_management": false, 00:22:35.807 "zone_append": false, 00:22:35.807 "compare": true, 00:22:35.807 "compare_and_write": true, 00:22:35.807 "abort": true, 00:22:35.807 "seek_hole": false, 00:22:35.807 "seek_data": false, 00:22:35.807 "copy": true, 00:22:35.807 "nvme_iov_md": false 00:22:35.807 }, 00:22:35.807 "memory_domains": [ 00:22:35.807 { 00:22:35.807 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:22:35.807 "dma_device_type": 0 00:22:35.807 } 00:22:35.807 ], 00:22:35.807 "driver_specific": { 00:22:35.807 "nvme": [ 00:22:35.807 { 00:22:35.807 "trid": { 00:22:35.807 "trtype": "RDMA", 00:22:35.807 "adrfam": "IPv4", 00:22:35.807 "traddr": "192.168.100.8", 00:22:35.807 "trsvcid": "4420", 00:22:35.807 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:35.807 }, 00:22:35.807 "ctrlr_data": { 00:22:35.807 "cntlid": 2, 00:22:35.807 "vendor_id": 
"0x8086", 00:22:35.807 "model_number": "SPDK bdev Controller", 00:22:35.807 "serial_number": "00000000000000000000", 00:22:35.807 "firmware_revision": "24.09", 00:22:35.807 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:35.807 "oacs": { 00:22:35.807 "security": 0, 00:22:35.807 "format": 0, 00:22:35.807 "firmware": 0, 00:22:35.807 "ns_manage": 0 00:22:35.807 }, 00:22:35.807 "multi_ctrlr": true, 00:22:35.807 "ana_reporting": false 00:22:35.807 }, 00:22:35.807 "vs": { 00:22:35.807 "nvme_version": "1.3" 00:22:35.807 }, 00:22:35.807 "ns_data": { 00:22:35.807 "id": 1, 00:22:35.807 "can_share": true 00:22:35.807 } 00:22:35.807 } 00:22:35.807 ], 00:22:35.807 "mp_policy": "active_passive" 00:22:35.807 } 00:22:35.807 } 00:22:35.807 ] 00:22:35.807 10:30:12 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:35.807 10:30:12 nvmf_rdma.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:35.807 10:30:12 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:35.807 10:30:12 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:36.068 10:30:13 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.068 10:30:13 nvmf_rdma.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:22:36.068 10:30:13 nvmf_rdma.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.4pYL6zHfwv 00:22:36.068 10:30:13 nvmf_rdma.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:36.068 10:30:13 nvmf_rdma.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.4pYL6zHfwv 00:22:36.068 10:30:13 nvmf_rdma.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:22:36.068 10:30:13 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.068 10:30:13 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:36.068 10:30:13 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.068 10:30:13 nvmf_rdma.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel 00:22:36.068 10:30:13 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.068 10:30:13 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:36.068 [2024-07-15 10:30:13.058318] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:22:36.068 10:30:13 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.068 10:30:13 nvmf_rdma.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.4pYL6zHfwv 00:22:36.068 10:30:13 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.068 10:30:13 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:36.068 10:30:13 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.068 10:30:13 nvmf_rdma.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.4pYL6zHfwv 00:22:36.068 10:30:13 nvmf_rdma.nvmf_async_init -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.068 10:30:13 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:36.068 [2024-07-15 10:30:13.082381] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:36.068 nvme0n1 00:22:36.068 10:30:13 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.068 10:30:13 nvmf_rdma.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:36.068 10:30:13 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.068 10:30:13 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:36.068 [ 00:22:36.068 { 00:22:36.068 "name": "nvme0n1", 00:22:36.068 "aliases": [ 00:22:36.069 "10d28b75-8c22-4d26-814e-bd67e230e0e9" 00:22:36.069 ], 00:22:36.069 "product_name": "NVMe disk", 00:22:36.069 "block_size": 512, 00:22:36.069 "num_blocks": 2097152, 00:22:36.069 "uuid": "10d28b75-8c22-4d26-814e-bd67e230e0e9", 00:22:36.069 "assigned_rate_limits": { 00:22:36.069 "rw_ios_per_sec": 0, 00:22:36.069 "rw_mbytes_per_sec": 0, 00:22:36.069 "r_mbytes_per_sec": 0, 00:22:36.069 "w_mbytes_per_sec": 0 00:22:36.069 }, 00:22:36.069 "claimed": false, 00:22:36.069 "zoned": false, 00:22:36.069 "supported_io_types": { 00:22:36.069 "read": true, 00:22:36.069 "write": true, 00:22:36.069 "unmap": false, 00:22:36.069 "flush": true, 00:22:36.069 "reset": true, 00:22:36.069 "nvme_admin": true, 00:22:36.069 "nvme_io": true, 00:22:36.069 "nvme_io_md": false, 00:22:36.069 "write_zeroes": true, 00:22:36.069 "zcopy": false, 00:22:36.069 "get_zone_info": false, 00:22:36.069 "zone_management": false, 00:22:36.069 "zone_append": false, 00:22:36.069 "compare": true, 00:22:36.069 "compare_and_write": true, 00:22:36.069 "abort": true, 00:22:36.069 "seek_hole": false, 00:22:36.069 "seek_data": false, 00:22:36.069 "copy": true, 00:22:36.069 "nvme_iov_md": false 00:22:36.069 }, 00:22:36.069 "memory_domains": [ 00:22:36.069 { 00:22:36.069 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:22:36.069 "dma_device_type": 0 00:22:36.069 } 00:22:36.069 ], 00:22:36.069 "driver_specific": { 00:22:36.069 "nvme": [ 00:22:36.069 { 00:22:36.069 "trid": { 00:22:36.069 "trtype": "RDMA", 00:22:36.069 "adrfam": "IPv4", 00:22:36.069 "traddr": "192.168.100.8", 00:22:36.069 "trsvcid": "4421", 00:22:36.069 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:36.069 }, 00:22:36.069 "ctrlr_data": { 00:22:36.069 "cntlid": 3, 00:22:36.069 "vendor_id": "0x8086", 00:22:36.069 "model_number": "SPDK bdev Controller", 00:22:36.069 "serial_number": "00000000000000000000", 00:22:36.069 "firmware_revision": "24.09", 00:22:36.069 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:36.069 "oacs": { 00:22:36.069 "security": 0, 00:22:36.069 "format": 0, 00:22:36.069 "firmware": 0, 00:22:36.069 "ns_manage": 0 00:22:36.069 }, 00:22:36.069 "multi_ctrlr": true, 00:22:36.069 "ana_reporting": false 00:22:36.069 }, 00:22:36.069 "vs": { 00:22:36.069 "nvme_version": "1.3" 00:22:36.069 }, 00:22:36.069 "ns_data": { 00:22:36.069 "id": 1, 00:22:36.069 "can_share": true 00:22:36.069 } 00:22:36.069 } 00:22:36.069 ], 00:22:36.069 "mp_policy": "active_passive" 00:22:36.069 } 00:22:36.069 } 00:22:36.069 ] 00:22:36.069 10:30:13 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.069 10:30:13 nvmf_rdma.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:36.069 10:30:13 nvmf_rdma.nvmf_async_init -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:22:36.069 10:30:13 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:36.069 10:30:13 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:36.069 10:30:13 nvmf_rdma.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.4pYL6zHfwv 00:22:36.069 10:30:13 nvmf_rdma.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:22:36.069 10:30:13 nvmf_rdma.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:22:36.069 10:30:13 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:36.069 10:30:13 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:22:36.069 10:30:13 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:22:36.069 10:30:13 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:22:36.069 10:30:13 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:22:36.069 10:30:13 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:36.069 10:30:13 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:22:36.069 rmmod nvme_rdma 00:22:36.069 rmmod nvme_fabrics 00:22:36.069 10:30:13 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:36.329 10:30:13 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:22:36.329 10:30:13 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:22:36.329 10:30:13 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 3004750 ']' 00:22:36.329 10:30:13 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 3004750 00:22:36.329 10:30:13 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 3004750 ']' 00:22:36.329 10:30:13 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 3004750 00:22:36.329 10:30:13 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:22:36.329 10:30:13 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:36.329 10:30:13 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3004750 00:22:36.329 10:30:13 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:36.329 10:30:13 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:36.329 10:30:13 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3004750' 00:22:36.329 killing process with pid 3004750 00:22:36.330 10:30:13 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 3004750 00:22:36.330 10:30:13 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 3004750 00:22:36.330 10:30:13 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:36.330 10:30:13 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:22:36.330 00:22:36.330 real 0m10.023s 00:22:36.330 user 0m4.204s 00:22:36.330 sys 0m6.397s 00:22:36.330 10:30:13 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:36.330 10:30:13 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:36.330 ************************************ 00:22:36.330 END TEST nvmf_async_init 00:22:36.330 ************************************ 00:22:36.592 10:30:13 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:22:36.592 10:30:13 nvmf_rdma -- nvmf/nvmf.sh@94 
-- # run_test dma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:22:36.592 10:30:13 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:36.592 10:30:13 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:36.592 10:30:13 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:36.592 ************************************ 00:22:36.592 START TEST dma 00:22:36.592 ************************************ 00:22:36.592 10:30:13 nvmf_rdma.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:22:36.592 * Looking for test storage... 00:22:36.592 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:22:36.592 10:30:13 nvmf_rdma.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:36.592 10:30:13 nvmf_rdma.dma -- nvmf/common.sh@7 -- # uname -s 00:22:36.592 10:30:13 nvmf_rdma.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:36.592 10:30:13 nvmf_rdma.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:36.592 10:30:13 nvmf_rdma.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:36.592 10:30:13 nvmf_rdma.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:36.592 10:30:13 nvmf_rdma.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:36.592 10:30:13 nvmf_rdma.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:36.592 10:30:13 nvmf_rdma.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:36.592 10:30:13 nvmf_rdma.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:36.592 10:30:13 nvmf_rdma.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:36.592 10:30:13 nvmf_rdma.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:36.592 10:30:13 nvmf_rdma.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:36.592 10:30:13 nvmf_rdma.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:36.592 10:30:13 nvmf_rdma.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:36.592 10:30:13 nvmf_rdma.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:36.592 10:30:13 nvmf_rdma.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:36.592 10:30:13 nvmf_rdma.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:36.592 10:30:13 nvmf_rdma.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:36.592 10:30:13 nvmf_rdma.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:36.592 10:30:13 nvmf_rdma.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:36.592 10:30:13 nvmf_rdma.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:36.592 10:30:13 nvmf_rdma.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.592 10:30:13 nvmf_rdma.dma -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.592 10:30:13 nvmf_rdma.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.592 10:30:13 nvmf_rdma.dma -- paths/export.sh@5 -- # export PATH 00:22:36.592 10:30:13 nvmf_rdma.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.592 10:30:13 nvmf_rdma.dma -- nvmf/common.sh@47 -- # : 0 00:22:36.592 10:30:13 nvmf_rdma.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:36.592 10:30:13 nvmf_rdma.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:36.592 10:30:13 nvmf_rdma.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:36.592 10:30:13 nvmf_rdma.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:36.592 10:30:13 nvmf_rdma.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:36.592 10:30:13 nvmf_rdma.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:36.592 10:30:13 nvmf_rdma.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:36.592 10:30:13 nvmf_rdma.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:36.592 10:30:13 nvmf_rdma.dma -- host/dma.sh@12 -- # '[' rdma '!=' rdma ']' 00:22:36.592 10:30:13 nvmf_rdma.dma -- host/dma.sh@16 -- # MALLOC_BDEV_SIZE=256 00:22:36.592 10:30:13 nvmf_rdma.dma -- host/dma.sh@17 -- # MALLOC_BLOCK_SIZE=512 00:22:36.592 10:30:13 nvmf_rdma.dma -- host/dma.sh@18 -- # subsystem=0 00:22:36.592 10:30:13 nvmf_rdma.dma -- host/dma.sh@93 -- # nvmftestinit 00:22:36.592 10:30:13 nvmf_rdma.dma -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:22:36.592 10:30:13 nvmf_rdma.dma -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:36.592 10:30:13 nvmf_rdma.dma -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:36.592 10:30:13 nvmf_rdma.dma -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:36.592 10:30:13 nvmf_rdma.dma -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:36.592 10:30:13 nvmf_rdma.dma -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:36.592 10:30:13 nvmf_rdma.dma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:22:36.592 10:30:13 nvmf_rdma.dma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:36.592 10:30:13 nvmf_rdma.dma -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:36.592 10:30:13 nvmf_rdma.dma -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:36.592 10:30:13 nvmf_rdma.dma -- nvmf/common.sh@285 -- # xtrace_disable 00:22:36.592 10:30:13 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:22:44.742 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@291 -- # pci_devs=() 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@295 -- # net_devs=() 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@296 -- # e810=() 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@296 -- # local -ga e810 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@297 -- # x722=() 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@297 -- # local -ga x722 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@298 -- # mlx=() 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@298 -- # local -ga mlx 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@341 -- # echo 'Found 
0000:98:00.0 (0x15b3 - 0x1015)' 00:22:44.743 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:22:44.743 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:22:44.743 Found net devices under 0000:98:00.0: mlx_0_0 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:22:44.743 Found net devices under 0000:98:00.1: mlx_0_1 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@414 -- # is_hw=yes 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@420 -- # rdma_device_init 00:22:44.743 10:30:21 nvmf_rdma.dma -- 
nvmf/common.sh@501 -- # load_ib_rdma_modules 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@58 -- # uname 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@62 -- # modprobe ib_cm 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@63 -- # modprobe ib_core 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@64 -- # modprobe ib_umad 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@66 -- # modprobe iw_cm 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@502 -- # allocate_nic_ips 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@73 -- # get_rdma_if_list 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@105 -- # continue 2 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@105 -- # continue 2 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:22:44.743 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:44.743 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:22:44.743 altname enp152s0f0np0 00:22:44.743 altname ens817f0np0 00:22:44.743 inet 
192.168.100.8/24 scope global mlx_0_0 00:22:44.743 valid_lft forever preferred_lft forever 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:44.743 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:44.744 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:44.744 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:22:44.744 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:22:44.744 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:22:44.744 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:44.744 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:22:44.744 altname enp152s0f1np1 00:22:44.744 altname ens817f1np1 00:22:44.744 inet 192.168.100.9/24 scope global mlx_0_1 00:22:44.744 valid_lft forever preferred_lft forever 00:22:44.744 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@422 -- # return 0 00:22:44.744 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:44.744 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:44.744 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:22:44.744 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:22:44.744 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@86 -- # get_rdma_if_list 00:22:44.744 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:44.744 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:44.744 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:44.744 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:44.744 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:44.744 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:44.744 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:44.744 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:44.744 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:44.744 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@105 -- # continue 2 00:22:44.744 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:44.744 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:44.744 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:44.744 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:44.744 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:44.744 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:44.744 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@105 -- # continue 2 00:22:44.744 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:44.744 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:22:44.744 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:44.744 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@113 -- # 
ip -o -4 addr show mlx_0_0 00:22:44.744 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:44.744 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:44.744 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:44.744 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:22:44.744 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:44.744 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:44.744 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:44.744 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:44.744 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:22:44.744 192.168.100.9' 00:22:44.744 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:22:44.744 192.168.100.9' 00:22:44.744 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@457 -- # head -n 1 00:22:44.744 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:44.744 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:22:44.744 192.168.100.9' 00:22:44.744 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@458 -- # tail -n +2 00:22:44.744 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@458 -- # head -n 1 00:22:44.744 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:44.744 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:22:44.744 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:44.744 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:22:44.744 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:22:44.744 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:22:44.744 10:30:21 nvmf_rdma.dma -- host/dma.sh@94 -- # nvmfappstart -m 0x3 00:22:44.744 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:44.744 10:30:21 nvmf_rdma.dma -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:44.744 10:30:21 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:22:44.744 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@481 -- # nvmfpid=3009427 00:22:44.744 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@482 -- # waitforlisten 3009427 00:22:44.744 10:30:21 nvmf_rdma.dma -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:44.744 10:30:21 nvmf_rdma.dma -- common/autotest_common.sh@829 -- # '[' -z 3009427 ']' 00:22:44.744 10:30:21 nvmf_rdma.dma -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:44.744 10:30:21 nvmf_rdma.dma -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:44.744 10:30:21 nvmf_rdma.dma -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:44.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:44.744 10:30:21 nvmf_rdma.dma -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:44.744 10:30:21 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:22:44.744 [2024-07-15 10:30:21.808461] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:22:44.744 [2024-07-15 10:30:21.808530] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:44.744 EAL: No free 2048 kB hugepages reported on node 1 00:22:44.744 [2024-07-15 10:30:21.879792] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:45.006 [2024-07-15 10:30:21.953310] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:45.006 [2024-07-15 10:30:21.953350] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:45.006 [2024-07-15 10:30:21.953357] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:45.006 [2024-07-15 10:30:21.953364] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:45.006 [2024-07-15 10:30:21.953370] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:45.006 [2024-07-15 10:30:21.953512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:45.006 [2024-07-15 10:30:21.953514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:45.578 10:30:22 nvmf_rdma.dma -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:45.578 10:30:22 nvmf_rdma.dma -- common/autotest_common.sh@862 -- # return 0 00:22:45.578 10:30:22 nvmf_rdma.dma -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:45.578 10:30:22 nvmf_rdma.dma -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:45.578 10:30:22 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:22:45.578 10:30:22 nvmf_rdma.dma -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:45.578 10:30:22 nvmf_rdma.dma -- host/dma.sh@96 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:22:45.578 10:30:22 nvmf_rdma.dma -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.578 10:30:22 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:22:45.578 [2024-07-15 10:30:22.649695] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6cfb70/0x6d4060) succeed. 00:22:45.579 [2024-07-15 10:30:22.662036] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6d1070/0x7156f0) succeed. 
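Note (annotation, not part of the captured output): the trace above is host/dma.sh starting nvmf_tgt on core mask 0x3 and creating the RDMA transport through rpc_cmd. A minimal hedged sketch of the same bring-up issued by hand with scripts/rpc.py, assuming the target is using the default /var/tmp/spdk.sock RPC socket mentioned earlier in this log:

    # Sketch only -- reconstructed from the rpc_cmd calls traced above, not a
    # verbatim excerpt of the test. Run from the SPDK source tree used by this job.
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    # (host/dma.sh waits for the RPC socket via waitforlisten before issuing RPCs)
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024

Both mlx5 ports are picked up when the transport is created, which is why the create_ib_device notices for mlx5_0 and mlx5_1 appear immediately afterwards.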
00:22:45.579 10:30:22 nvmf_rdma.dma -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.579 10:30:22 nvmf_rdma.dma -- host/dma.sh@97 -- # rpc_cmd bdev_malloc_create 256 512 -b Malloc0 00:22:45.579 10:30:22 nvmf_rdma.dma -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.579 10:30:22 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:22:45.840 Malloc0 00:22:45.840 10:30:22 nvmf_rdma.dma -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.840 10:30:22 nvmf_rdma.dma -- host/dma.sh@98 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001 00:22:45.840 10:30:22 nvmf_rdma.dma -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.840 10:30:22 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:22:45.840 10:30:22 nvmf_rdma.dma -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.840 10:30:22 nvmf_rdma.dma -- host/dma.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:22:45.840 10:30:22 nvmf_rdma.dma -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.840 10:30:22 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:22:45.840 10:30:22 nvmf_rdma.dma -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.840 10:30:22 nvmf_rdma.dma -- host/dma.sh@100 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:22:45.840 10:30:22 nvmf_rdma.dma -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.840 10:30:22 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:22:45.840 [2024-07-15 10:30:22.818454] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:45.840 10:30:22 nvmf_rdma.dma -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.840 10:30:22 nvmf_rdma.dma -- host/dma.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Nvme0n1 -f -x translate 00:22:45.840 10:30:22 nvmf_rdma.dma -- host/dma.sh@104 -- # gen_nvmf_target_json 0 00:22:45.840 10:30:22 nvmf_rdma.dma -- nvmf/common.sh@532 -- # config=() 00:22:45.840 10:30:22 nvmf_rdma.dma -- nvmf/common.sh@532 -- # local subsystem config 00:22:45.840 10:30:22 nvmf_rdma.dma -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:45.840 10:30:22 nvmf_rdma.dma -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:45.840 { 00:22:45.840 "params": { 00:22:45.840 "name": "Nvme$subsystem", 00:22:45.840 "trtype": "$TEST_TRANSPORT", 00:22:45.840 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.840 "adrfam": "ipv4", 00:22:45.840 "trsvcid": "$NVMF_PORT", 00:22:45.840 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.840 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.840 "hdgst": ${hdgst:-false}, 00:22:45.840 "ddgst": ${ddgst:-false} 00:22:45.840 }, 00:22:45.840 "method": "bdev_nvme_attach_controller" 00:22:45.840 } 00:22:45.840 EOF 00:22:45.840 )") 00:22:45.840 10:30:22 nvmf_rdma.dma -- nvmf/common.sh@554 -- # cat 00:22:45.840 10:30:22 nvmf_rdma.dma -- nvmf/common.sh@556 -- # jq . 
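Note (annotation, not part of the captured output): the rpc_cmd calls traced above export a 256 MB malloc bdev over NVMe/RDMA and then launch test_dma against it. A hedged reconstruction of that sequence with scripts/rpc.py, using only the names, NQN, address and flags shown in the log:

    # Sketch only -- mirrors host/dma.sh@97-100 and @104 above, not a verbatim excerpt.
    ./scripts/rpc.py bdev_malloc_create 256 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
    # test_dma then runs a 5-second randrw pass against Nvme0n1 in translate mode;
    # in the test, fd 62 carries the JSON produced by gen_nvmf_target_json 0.
    ./test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc \
            --json /dev/fd/62 -b Nvme0n1 -f -x translate

The "bdev Nvme0n1 supports RDMA memory domain" line and the "translate" counters in the summary further down correspond to this -x translate run.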
00:22:45.840 10:30:22 nvmf_rdma.dma -- nvmf/common.sh@557 -- # IFS=, 00:22:45.840 10:30:22 nvmf_rdma.dma -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:45.840 "params": { 00:22:45.840 "name": "Nvme0", 00:22:45.840 "trtype": "rdma", 00:22:45.840 "traddr": "192.168.100.8", 00:22:45.840 "adrfam": "ipv4", 00:22:45.840 "trsvcid": "4420", 00:22:45.840 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:45.840 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:22:45.840 "hdgst": false, 00:22:45.840 "ddgst": false 00:22:45.840 }, 00:22:45.840 "method": "bdev_nvme_attach_controller" 00:22:45.840 }' 00:22:45.840 [2024-07-15 10:30:22.867716] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:22:45.840 [2024-07-15 10:30:22.867767] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3009687 ] 00:22:45.840 EAL: No free 2048 kB hugepages reported on node 1 00:22:45.840 [2024-07-15 10:30:22.924736] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:45.840 [2024-07-15 10:30:22.978755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:45.840 [2024-07-15 10:30:22.978755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:51.130 bdev Nvme0n1 reports 1 memory domains 00:22:51.131 bdev Nvme0n1 supports RDMA memory domain 00:22:51.131 Initialization complete, running randrw IO for 5 sec on 2 cores 00:22:51.131 ========================================================================== 00:22:51.131 Latency [us] 00:22:51.131 IOPS MiB/s Average min max 00:22:51.131 Core 2: 24245.06 94.71 659.45 289.12 9879.51 00:22:51.131 Core 3: 27486.63 107.37 581.46 235.96 9972.08 00:22:51.131 ========================================================================== 00:22:51.131 Total : 51731.69 202.08 618.01 235.96 9972.08 00:22:51.131 00:22:51.131 Total operations: 258677, translate 258677 pull_push 0 memzero 0 00:22:51.131 10:30:28 nvmf_rdma.dma -- host/dma.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Malloc0 -x pull_push 00:22:51.131 10:30:28 nvmf_rdma.dma -- host/dma.sh@107 -- # gen_malloc_json 00:22:51.131 10:30:28 nvmf_rdma.dma -- host/dma.sh@21 -- # jq . 00:22:51.392 [2024-07-15 10:30:28.343340] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:22:51.392 [2024-07-15 10:30:28.343396] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3010805 ] 00:22:51.392 EAL: No free 2048 kB hugepages reported on node 1 00:22:51.392 [2024-07-15 10:30:28.400036] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:51.392 [2024-07-15 10:30:28.451872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:51.392 [2024-07-15 10:30:28.451872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:56.683 bdev Malloc0 reports 2 memory domains 00:22:56.683 bdev Malloc0 doesn't support RDMA memory domain 00:22:56.683 Initialization complete, running randrw IO for 5 sec on 2 cores 00:22:56.683 ========================================================================== 00:22:56.683 Latency [us] 00:22:56.683 IOPS MiB/s Average min max 00:22:56.683 Core 2: 18917.40 73.90 845.21 308.28 1842.53 00:22:56.683 Core 3: 19001.77 74.23 841.44 301.69 1475.00 00:22:56.683 ========================================================================== 00:22:56.683 Total : 37919.17 148.12 843.32 301.69 1842.53 00:22:56.683 00:22:56.683 Total operations: 189650, translate 0 pull_push 758600 memzero 0 00:22:56.683 10:30:33 nvmf_rdma.dma -- host/dma.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero 00:22:56.683 10:30:33 nvmf_rdma.dma -- host/dma.sh@110 -- # gen_lvol_nvme_json 0 00:22:56.683 10:30:33 nvmf_rdma.dma -- host/dma.sh@48 -- # local subsystem=0 00:22:56.683 10:30:33 nvmf_rdma.dma -- host/dma.sh@50 -- # jq . 00:22:56.683 Ignoring -M option 00:22:56.683 [2024-07-15 10:30:33.701357] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:22:56.683 [2024-07-15 10:30:33.701416] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3011806 ] 00:22:56.683 EAL: No free 2048 kB hugepages reported on node 1 00:22:56.683 [2024-07-15 10:30:33.757991] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:56.683 [2024-07-15 10:30:33.809164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:56.683 [2024-07-15 10:30:33.809164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:02.016 bdev a4696b3c-26cc-422e-9953-883e14d03fe1 reports 1 memory domains 00:23:02.016 bdev a4696b3c-26cc-422e-9953-883e14d03fe1 supports RDMA memory domain 00:23:02.016 Initialization complete, running randread IO for 5 sec on 2 cores 00:23:02.016 ========================================================================== 00:23:02.016 Latency [us] 00:23:02.016 IOPS MiB/s Average min max 00:23:02.016 Core 2: 126655.39 494.75 125.84 66.30 3482.64 00:23:02.016 Core 3: 133112.55 519.97 119.72 61.96 3560.08 00:23:02.016 ========================================================================== 00:23:02.016 Total : 259767.94 1014.72 122.70 61.96 3560.08 00:23:02.016 00:23:02.016 Total operations: 1298928, translate 0 pull_push 0 memzero 1298928 00:23:02.016 10:30:39 nvmf_rdma.dma -- host/dma.sh@113 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' 00:23:02.016 EAL: No free 2048 kB hugepages reported on node 1 00:23:02.276 [2024-07-15 10:30:39.284821] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:04.815 Initializing NVMe Controllers 00:23:04.815 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:23:04.815 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:23:04.815 Initialization complete. Launching workers. 00:23:04.815 ======================================================== 00:23:04.815 Latency(us) 00:23:04.815 Device Information : IOPS MiB/s Average min max 00:23:04.815 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 0: 2016.00 7.88 7972.80 5029.42 10932.88 00:23:04.815 ======================================================== 00:23:04.815 Total : 2016.00 7.88 7972.80 5029.42 10932.88 00:23:04.815 00:23:04.815 10:30:41 nvmf_rdma.dma -- host/dma.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x translate 00:23:04.815 10:30:41 nvmf_rdma.dma -- host/dma.sh@116 -- # gen_lvol_nvme_json 0 00:23:04.815 10:30:41 nvmf_rdma.dma -- host/dma.sh@48 -- # local subsystem=0 00:23:04.815 10:30:41 nvmf_rdma.dma -- host/dma.sh@50 -- # jq . 00:23:04.815 [2024-07-15 10:30:41.654850] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:23:04.815 [2024-07-15 10:30:41.654901] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3013290 ] 00:23:04.815 EAL: No free 2048 kB hugepages reported on node 1 00:23:04.815 [2024-07-15 10:30:41.710069] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:04.815 [2024-07-15 10:30:41.763154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:04.815 [2024-07-15 10:30:41.763155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:10.098 bdev 7a0cfdd9-c6aa-412f-ae13-15efd90b31dd reports 1 memory domains 00:23:10.098 bdev 7a0cfdd9-c6aa-412f-ae13-15efd90b31dd supports RDMA memory domain 00:23:10.098 Initialization complete, running randrw IO for 5 sec on 2 cores 00:23:10.098 ========================================================================== 00:23:10.098 Latency [us] 00:23:10.098 IOPS MiB/s Average min max 00:23:10.098 Core 2: 21474.49 83.88 744.56 10.54 14203.40 00:23:10.098 Core 3: 27548.45 107.61 580.25 12.97 13831.23 00:23:10.098 ========================================================================== 00:23:10.098 Total : 49022.94 191.50 652.23 10.54 14203.40 00:23:10.098 00:23:10.098 Total operations: 245157, translate 245054 pull_push 0 memzero 103 00:23:10.098 10:30:47 nvmf_rdma.dma -- host/dma.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:23:10.098 10:30:47 nvmf_rdma.dma -- host/dma.sh@120 -- # nvmftestfini 00:23:10.098 10:30:47 nvmf_rdma.dma -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:10.098 10:30:47 nvmf_rdma.dma -- nvmf/common.sh@117 -- # sync 00:23:10.098 10:30:47 nvmf_rdma.dma -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:23:10.098 10:30:47 nvmf_rdma.dma -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:23:10.098 10:30:47 nvmf_rdma.dma -- nvmf/common.sh@120 -- # set +e 00:23:10.098 10:30:47 nvmf_rdma.dma -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:10.098 10:30:47 nvmf_rdma.dma -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:23:10.098 rmmod nvme_rdma 00:23:10.098 rmmod nvme_fabrics 00:23:10.098 10:30:47 nvmf_rdma.dma -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:10.098 10:30:47 nvmf_rdma.dma -- nvmf/common.sh@124 -- # set -e 00:23:10.098 10:30:47 nvmf_rdma.dma -- nvmf/common.sh@125 -- # return 0 00:23:10.098 10:30:47 nvmf_rdma.dma -- nvmf/common.sh@489 -- # '[' -n 3009427 ']' 00:23:10.098 10:30:47 nvmf_rdma.dma -- nvmf/common.sh@490 -- # killprocess 3009427 00:23:10.098 10:30:47 nvmf_rdma.dma -- common/autotest_common.sh@948 -- # '[' -z 3009427 ']' 00:23:10.098 10:30:47 nvmf_rdma.dma -- common/autotest_common.sh@952 -- # kill -0 3009427 00:23:10.098 10:30:47 nvmf_rdma.dma -- common/autotest_common.sh@953 -- # uname 00:23:10.098 10:30:47 nvmf_rdma.dma -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:10.098 10:30:47 nvmf_rdma.dma -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3009427 00:23:10.098 10:30:47 nvmf_rdma.dma -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:10.098 10:30:47 nvmf_rdma.dma -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:10.098 10:30:47 nvmf_rdma.dma -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3009427' 00:23:10.098 killing process with pid 3009427 00:23:10.098 10:30:47 nvmf_rdma.dma -- common/autotest_common.sh@967 -- # kill 3009427 00:23:10.098 10:30:47 nvmf_rdma.dma -- 
common/autotest_common.sh@972 -- # wait 3009427 00:23:10.358 10:30:47 nvmf_rdma.dma -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:10.358 10:30:47 nvmf_rdma.dma -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:23:10.358 00:23:10.358 real 0m33.908s 00:23:10.358 user 1m35.616s 00:23:10.358 sys 0m6.892s 00:23:10.358 10:30:47 nvmf_rdma.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:10.358 10:30:47 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:23:10.358 ************************************ 00:23:10.358 END TEST dma 00:23:10.358 ************************************ 00:23:10.358 10:30:47 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:23:10.358 10:30:47 nvmf_rdma -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:23:10.358 10:30:47 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:10.358 10:30:47 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:10.358 10:30:47 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:23:10.619 ************************************ 00:23:10.619 START TEST nvmf_identify 00:23:10.619 ************************************ 00:23:10.619 10:30:47 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:23:10.619 * Looking for test storage... 00:23:10.619 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:23:10.619 10:30:47 nvmf_rdma.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:10.619 10:30:47 nvmf_rdma.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:10.619 10:30:47 nvmf_rdma.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:10.619 10:30:47 nvmf_rdma.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:10.619 10:30:47 nvmf_rdma.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:10.619 10:30:47 nvmf_rdma.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:10.619 10:30:47 nvmf_rdma.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:10.619 10:30:47 nvmf_rdma.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:10.619 10:30:47 nvmf_rdma.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:10.619 10:30:47 nvmf_rdma.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:10.619 10:30:47 nvmf_rdma.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:10.619 10:30:47 nvmf_rdma.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:10.619 10:30:47 nvmf_rdma.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:10.619 10:30:47 nvmf_rdma.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:10.619 10:30:47 nvmf_rdma.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:10.619 10:30:47 nvmf_rdma.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:10.619 10:30:47 nvmf_rdma.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:10.619 10:30:47 nvmf_rdma.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:10.619 10:30:47 nvmf_rdma.nvmf_identify -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:10.619 10:30:47 nvmf_rdma.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:10.619 10:30:47 nvmf_rdma.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:10.619 10:30:47 nvmf_rdma.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:10.619 10:30:47 nvmf_rdma.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.619 10:30:47 nvmf_rdma.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.619 10:30:47 nvmf_rdma.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.619 10:30:47 nvmf_rdma.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:23:10.619 10:30:47 nvmf_rdma.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.619 10:30:47 nvmf_rdma.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:23:10.619 10:30:47 nvmf_rdma.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:10.619 10:30:47 nvmf_rdma.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:10.619 10:30:47 nvmf_rdma.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:10.619 10:30:47 nvmf_rdma.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:10.619 10:30:47 nvmf_rdma.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:10.619 10:30:47 nvmf_rdma.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 
00:23:10.619 10:30:47 nvmf_rdma.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:10.619 10:30:47 nvmf_rdma.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:10.619 10:30:47 nvmf_rdma.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:10.619 10:30:47 nvmf_rdma.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:10.619 10:30:47 nvmf_rdma.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:10.619 10:30:47 nvmf_rdma.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:23:10.619 10:30:47 nvmf_rdma.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:10.619 10:30:47 nvmf_rdma.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:10.619 10:30:47 nvmf_rdma.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:10.619 10:30:47 nvmf_rdma.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:10.619 10:30:47 nvmf_rdma.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:10.619 10:30:47 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:10.619 10:30:47 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:10.619 10:30:47 nvmf_rdma.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:10.619 10:30:47 nvmf_rdma.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:10.619 10:30:47 nvmf_rdma.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:23:10.619 10:30:47 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:18.761 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:18.761 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:23:18.761 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:18.761 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:18.761 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:18.761 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:18.761 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:18.761 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 
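Editor's note: the trace above (and continuing just below) builds per-vendor PCI device-ID tables (Intel E810/X722, Mellanox ConnectX) and then resolves each matching PCI function to its kernel net device through sysfs. A rough sketch of the same idea, not the nvmf/common.sh implementation itself:

    # Sketch: walk sysfs, keep Mellanox (0x15b3) functions, report the net devices bound to each.
    for pci in /sys/bus/pci/devices/*; do
        [ "$(cat "$pci/vendor" 2>/dev/null)" = "0x15b3" ] || continue
        echo "Found ${pci##*/} (0x15b3 - $(cat "$pci/device"))"
        for net in "$pci"/net/*; do
            [ -e "$net" ] && echo "  net device: ${net##*/}"
        done
    done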
00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:23:18.762 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:23:18.762 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- 
nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:23:18.762 Found net devices under 0000:98:00.0: mlx_0_0 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:23:18.762 Found net devices under 0000:98:00.1: mlx_0_1 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@420 -- # rdma_device_init 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@58 -- # uname 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@62 -- # modprobe ib_cm 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@63 -- # modprobe ib_core 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@64 -- # modprobe ib_umad 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@66 -- # modprobe iw_cm 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@502 -- # allocate_nic_ips 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@73 -- # get_rdma_if_list 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- 
nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:23:18.762 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:18.762 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:23:18.762 altname enp152s0f0np0 00:23:18.762 altname ens817f0np0 00:23:18.762 inet 192.168.100.8/24 scope global mlx_0_0 00:23:18.762 valid_lft forever preferred_lft forever 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:23:18.762 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:18.762 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:23:18.762 altname enp152s0f1np1 00:23:18.762 altname ens817f1np1 00:23:18.762 inet 192.168.100.9/24 scope global mlx_0_1 00:23:18.762 valid_lft forever 
preferred_lft forever 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@86 -- # get_rdma_if_list 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:18.762 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:18.763 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:18.763 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:18.763 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:23:18.763 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:18.763 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:18.763 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:18.763 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:18.763 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:18.763 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:18.763 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:23:18.763 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:18.763 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:23:18.763 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:18.763 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:18.763 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:18.763 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:18.763 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:18.763 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:23:18.763 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:18.763 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:18.763 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:18.763 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:18.763 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 
00:23:18.763 192.168.100.9' 00:23:18.763 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:23:18.763 192.168.100.9' 00:23:18.763 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@457 -- # head -n 1 00:23:18.763 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:18.763 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:23:18.763 192.168.100.9' 00:23:18.763 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@458 -- # tail -n +2 00:23:18.763 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@458 -- # head -n 1 00:23:18.763 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:18.763 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:23:18.763 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:18.763 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:23:18.763 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:23:18.763 10:30:55 nvmf_rdma.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:23:18.763 10:30:55 nvmf_rdma.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:23:18.763 10:30:55 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:18.763 10:30:55 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:18.763 10:30:55 nvmf_rdma.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3018630 00:23:18.763 10:30:55 nvmf_rdma.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:18.763 10:30:55 nvmf_rdma.nvmf_identify -- host/identify.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:18.763 10:30:55 nvmf_rdma.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3018630 00:23:18.763 10:30:55 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 3018630 ']' 00:23:18.763 10:30:55 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:18.763 10:30:55 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:18.763 10:30:55 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:18.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:18.763 10:30:55 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:18.763 10:30:55 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:18.763 [2024-07-15 10:30:55.806697] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:23:18.763 [2024-07-15 10:30:55.806767] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:18.763 EAL: No free 2048 kB hugepages reported on node 1 00:23:18.763 [2024-07-15 10:30:55.879069] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:18.763 [2024-07-15 10:30:55.954638] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
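Editor's note: at this point nvmf_tgt has been launched with `-i 0 -e 0xFFFF -m 0xF` and the test is in waitforlisten, blocking until the RPC socket at /var/tmp/spdk.sock answers. A sketch of that launch-and-wait pattern (not the autotest helper's actual implementation); `rpc_get_methods` is simply a cheap RPC to probe readiness with:

    # Start the target in the background, then poll the RPC socket until it responds.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
        sleep 0.5
    done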
00:23:18.763 [2024-07-15 10:30:55.954680] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:18.763 [2024-07-15 10:30:55.954688] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:18.763 [2024-07-15 10:30:55.954694] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:18.763 [2024-07-15 10:30:55.954700] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:18.763 [2024-07-15 10:30:55.954845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:18.763 [2024-07-15 10:30:55.954961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:18.763 [2024-07-15 10:30:55.955162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:18.763 [2024-07-15 10:30:55.955163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:19.706 10:30:56 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:19.706 10:30:56 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:23:19.706 10:30:56 nvmf_rdma.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:19.706 10:30:56 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.706 10:30:56 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:19.706 [2024-07-15 10:30:56.627940] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x752200/0x7566f0) succeed. 00:23:19.706 [2024-07-15 10:30:56.641084] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x753840/0x797d80) succeed. 00:23:19.706 10:30:56 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.706 10:30:56 nvmf_rdma.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:23:19.706 10:30:56 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:19.706 10:30:56 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:19.706 10:30:56 nvmf_rdma.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:19.706 10:30:56 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.706 10:30:56 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:19.706 Malloc0 00:23:19.706 10:30:56 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.706 10:30:56 nvmf_rdma.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:19.706 10:30:56 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.706 10:30:56 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:19.706 10:30:56 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.706 10:30:56 nvmf_rdma.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:23:19.706 10:30:56 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.706 10:30:56 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:19.706 10:30:56 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.706 10:30:56 
nvmf_rdma.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:23:19.706 10:30:56 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.706 10:30:56 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:19.706 [2024-07-15 10:30:56.853569] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:19.706 10:30:56 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.706 10:30:56 nvmf_rdma.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:23:19.706 10:30:56 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.706 10:30:56 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:19.706 10:30:56 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.706 10:30:56 nvmf_rdma.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:23:19.706 10:30:56 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.706 10:30:56 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:19.706 [ 00:23:19.706 { 00:23:19.706 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:19.706 "subtype": "Discovery", 00:23:19.706 "listen_addresses": [ 00:23:19.706 { 00:23:19.706 "trtype": "RDMA", 00:23:19.706 "adrfam": "IPv4", 00:23:19.706 "traddr": "192.168.100.8", 00:23:19.706 "trsvcid": "4420" 00:23:19.706 } 00:23:19.706 ], 00:23:19.706 "allow_any_host": true, 00:23:19.706 "hosts": [] 00:23:19.706 }, 00:23:19.706 { 00:23:19.706 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:19.706 "subtype": "NVMe", 00:23:19.706 "listen_addresses": [ 00:23:19.706 { 00:23:19.706 "trtype": "RDMA", 00:23:19.706 "adrfam": "IPv4", 00:23:19.706 "traddr": "192.168.100.8", 00:23:19.706 "trsvcid": "4420" 00:23:19.706 } 00:23:19.706 ], 00:23:19.706 "allow_any_host": true, 00:23:19.706 "hosts": [], 00:23:19.706 "serial_number": "SPDK00000000000001", 00:23:19.706 "model_number": "SPDK bdev Controller", 00:23:19.706 "max_namespaces": 32, 00:23:19.706 "min_cntlid": 1, 00:23:19.706 "max_cntlid": 65519, 00:23:19.706 "namespaces": [ 00:23:19.706 { 00:23:19.706 "nsid": 1, 00:23:19.706 "bdev_name": "Malloc0", 00:23:19.706 "name": "Malloc0", 00:23:19.706 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:23:19.706 "eui64": "ABCDEF0123456789", 00:23:19.706 "uuid": "d568c97f-cc0f-4fca-a0f8-a2aa2acc04a1" 00:23:19.706 } 00:23:19.706 ] 00:23:19.706 } 00:23:19.706 ] 00:23:19.706 10:30:56 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.706 10:30:56 nvmf_rdma.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:23:19.971 [2024-07-15 10:30:56.915625] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
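Editor's note: the rpc_cmd calls above (RDMA transport, Malloc0 bdev, cnode1 subsystem, namespace, data and discovery listeners) are the same provisioning one can issue as standalone scripts/rpc.py invocations; the argument values below are the ones from this run:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
    $rpc nvmf_get_subsystems

The spdk_nvme_identify run that starts here targets the discovery subsystem; an nvme-cli `nvme discover -t rdma -a 192.168.100.8 -s 4420` against the same target should report roughly the same two discovery log entries that are printed further below.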
00:23:19.971 [2024-07-15 10:30:56.915668] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3018848 ] 00:23:19.971 EAL: No free 2048 kB hugepages reported on node 1 00:23:19.971 [2024-07-15 10:30:56.971267] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:23:19.971 [2024-07-15 10:30:56.971363] nvme_rdma.c:2192:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:23:19.971 [2024-07-15 10:30:56.971378] nvme_rdma.c:1211:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:23:19.971 [2024-07-15 10:30:56.971382] nvme_rdma.c:1215:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:23:19.971 [2024-07-15 10:30:56.971410] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:23:19.971 [2024-07-15 10:30:56.988934] nvme_rdma.c: 430:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 00:23:19.971 [2024-07-15 10:30:57.010438] nvme_rdma.c:1100:nvme_rdma_connect_established: *DEBUG*: rc =0 00:23:19.971 [2024-07-15 10:30:57.010448] nvme_rdma.c:1105:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:23:19.971 [2024-07-15 10:30:57.010456] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x180100 00:23:19.971 [2024-07-15 10:30:57.010462] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x180100 00:23:19.971 [2024-07-15 10:30:57.010467] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x180100 00:23:19.971 [2024-07-15 10:30:57.010472] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x180100 00:23:19.971 [2024-07-15 10:30:57.010477] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x180100 00:23:19.971 [2024-07-15 10:30:57.010482] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x180100 00:23:19.971 [2024-07-15 10:30:57.010487] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x180100 00:23:19.971 [2024-07-15 10:30:57.010492] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x180100 00:23:19.971 [2024-07-15 10:30:57.010497] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x180100 00:23:19.971 [2024-07-15 10:30:57.010502] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x180100 00:23:19.971 [2024-07-15 10:30:57.010507] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x180100 00:23:19.971 [2024-07-15 10:30:57.010512] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x180100 00:23:19.971 [2024-07-15 10:30:57.010517] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x180100 00:23:19.971 [2024-07-15 10:30:57.010522] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x180100 00:23:19.971 [2024-07-15 10:30:57.010527] nvme_rdma.c: 
888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x180100 00:23:19.971 [2024-07-15 10:30:57.010532] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x180100 00:23:19.971 [2024-07-15 10:30:57.010537] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x180100 00:23:19.971 [2024-07-15 10:30:57.010542] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x180100 00:23:19.971 [2024-07-15 10:30:57.010547] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x180100 00:23:19.971 [2024-07-15 10:30:57.010552] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x180100 00:23:19.971 [2024-07-15 10:30:57.010561] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x180100 00:23:19.971 [2024-07-15 10:30:57.010566] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x180100 00:23:19.971 [2024-07-15 10:30:57.010571] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x180100 00:23:19.971 [2024-07-15 10:30:57.010576] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x180100 00:23:19.971 [2024-07-15 10:30:57.010581] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x180100 00:23:19.971 [2024-07-15 10:30:57.010586] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x180100 00:23:19.971 [2024-07-15 10:30:57.010591] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x180100 00:23:19.971 [2024-07-15 10:30:57.010596] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x180100 00:23:19.971 [2024-07-15 10:30:57.010601] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x180100 00:23:19.971 [2024-07-15 10:30:57.010606] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x180100 00:23:19.971 [2024-07-15 10:30:57.010611] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x180100 00:23:19.971 [2024-07-15 10:30:57.010615] nvme_rdma.c:1119:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:23:19.971 [2024-07-15 10:30:57.010620] nvme_rdma.c:1122:nvme_rdma_connect_established: *DEBUG*: rc =0 00:23:19.971 [2024-07-15 10:30:57.010624] nvme_rdma.c:1127:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:23:19.971 [2024-07-15 10:30:57.010642] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:23:19.971 [2024-07-15 10:30:57.010655] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf1c0 len:0x400 key:0x180100 00:23:19.971 [2024-07-15 10:30:57.017235] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.971 [2024-07-15 10:30:57.017244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:23:19.971 [2024-07-15 10:30:57.017251] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x180100 00:23:19.972 [2024-07-15 10:30:57.017258] 
nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:19.972 [2024-07-15 10:30:57.017266] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:23:19.972 [2024-07-15 10:30:57.017272] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:23:19.972 [2024-07-15 10:30:57.017285] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:23:19.972 [2024-07-15 10:30:57.017293] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:19.972 [2024-07-15 10:30:57.017314] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.972 [2024-07-15 10:30:57.017319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:23:19.972 [2024-07-15 10:30:57.017325] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:23:19.972 [2024-07-15 10:30:57.017330] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x180100 00:23:19.972 [2024-07-15 10:30:57.017336] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:23:19.972 [2024-07-15 10:30:57.017343] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:23:19.972 [2024-07-15 10:30:57.017350] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:19.972 [2024-07-15 10:30:57.017372] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.972 [2024-07-15 10:30:57.017377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:23:19.972 [2024-07-15 10:30:57.017384] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:23:19.972 [2024-07-15 10:30:57.017389] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x180100 00:23:19.972 [2024-07-15 10:30:57.017396] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:23:19.972 [2024-07-15 10:30:57.017402] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:23:19.972 [2024-07-15 10:30:57.017409] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:19.972 [2024-07-15 10:30:57.017426] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.972 [2024-07-15 10:30:57.017430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:19.972 [2024-07-15 10:30:57.017436] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:19.972 [2024-07-15 10:30:57.017441] 
nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x180100 00:23:19.972 [2024-07-15 10:30:57.017449] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:23:19.972 [2024-07-15 10:30:57.017456] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:19.972 [2024-07-15 10:30:57.017477] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.972 [2024-07-15 10:30:57.017481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:19.972 [2024-07-15 10:30:57.017487] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:23:19.972 [2024-07-15 10:30:57.017492] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:23:19.972 [2024-07-15 10:30:57.017496] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x180100 00:23:19.972 [2024-07-15 10:30:57.017502] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:19.972 [2024-07-15 10:30:57.017607] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:23:19.972 [2024-07-15 10:30:57.017612] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:19.972 [2024-07-15 10:30:57.017621] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:23:19.972 [2024-07-15 10:30:57.017628] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:19.972 [2024-07-15 10:30:57.017650] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.972 [2024-07-15 10:30:57.017655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:19.972 [2024-07-15 10:30:57.017660] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:19.972 [2024-07-15 10:30:57.017664] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x180100 00:23:19.972 [2024-07-15 10:30:57.017676] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:23:19.972 [2024-07-15 10:30:57.017683] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:19.972 [2024-07-15 10:30:57.017707] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.972 [2024-07-15 10:30:57.017712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:23:19.972 [2024-07-15 10:30:57.017717] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller 
is ready 00:23:19.972 [2024-07-15 10:30:57.017722] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:23:19.972 [2024-07-15 10:30:57.017726] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x180100 00:23:19.972 [2024-07-15 10:30:57.017732] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:23:19.972 [2024-07-15 10:30:57.017740] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:23:19.972 [2024-07-15 10:30:57.017749] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:23:19.972 [2024-07-15 10:30:57.017756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x180100 00:23:19.972 [2024-07-15 10:30:57.017795] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.972 [2024-07-15 10:30:57.017799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:19.972 [2024-07-15 10:30:57.017808] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:23:19.972 [2024-07-15 10:30:57.017813] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:23:19.972 [2024-07-15 10:30:57.017817] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:23:19.972 [2024-07-15 10:30:57.017822] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:23:19.972 [2024-07-15 10:30:57.017827] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:23:19.972 [2024-07-15 10:30:57.017831] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:23:19.972 [2024-07-15 10:30:57.017836] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x180100 00:23:19.972 [2024-07-15 10:30:57.017842] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:23:19.972 [2024-07-15 10:30:57.017850] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:23:19.972 [2024-07-15 10:30:57.017856] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:19.972 [2024-07-15 10:30:57.017877] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.972 [2024-07-15 10:30:57.017882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:19.972 [2024-07-15 10:30:57.017890] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x180100 00:23:19.972 [2024-07-15 10:30:57.017896] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.972 [2024-07-15 10:30:57.017905] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0600 length 0x40 lkey 0x180100 00:23:19.972 [2024-07-15 10:30:57.017910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.972 [2024-07-15 10:30:57.017917] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:19.972 [2024-07-15 10:30:57.017922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.972 [2024-07-15 10:30:57.017928] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x180100 00:23:19.972 [2024-07-15 10:30:57.017934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.972 [2024-07-15 10:30:57.017939] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:23:19.972 [2024-07-15 10:30:57.017944] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x180100 00:23:19.972 [2024-07-15 10:30:57.017953] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:19.972 [2024-07-15 10:30:57.017960] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:23:19.972 [2024-07-15 10:30:57.017967] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:19.972 [2024-07-15 10:30:57.017991] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.972 [2024-07-15 10:30:57.017996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:23:19.972 [2024-07-15 10:30:57.018001] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:23:19.972 [2024-07-15 10:30:57.018008] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:23:19.972 [2024-07-15 10:30:57.018013] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x180100 00:23:19.972 [2024-07-15 10:30:57.018022] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:23:19.972 [2024-07-15 10:30:57.018029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x180100 00:23:19.972 [2024-07-15 10:30:57.018059] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.972 [2024-07-15 10:30:57.018063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:19.972 [2024-07-15 10:30:57.018069] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 
length 0x10 lkey 0x180100 00:23:19.972 [2024-07-15 10:30:57.018079] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:23:19.972 [2024-07-15 10:30:57.018101] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:23:19.973 [2024-07-15 10:30:57.018109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x400 key:0x180100 00:23:19.973 [2024-07-15 10:30:57.018116] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x180100 00:23:19.973 [2024-07-15 10:30:57.018122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:19.973 [2024-07-15 10:30:57.018144] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.973 [2024-07-15 10:30:57.018151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:19.973 [2024-07-15 10:30:57.018161] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b00 length 0x40 lkey 0x180100 00:23:19.973 [2024-07-15 10:30:57.018168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0xc00 key:0x180100 00:23:19.973 [2024-07-15 10:30:57.018173] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x180100 00:23:19.973 [2024-07-15 10:30:57.018178] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.973 [2024-07-15 10:30:57.018182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:19.973 [2024-07-15 10:30:57.018187] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x180100 00:23:19.973 [2024-07-15 10:30:57.018203] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.973 [2024-07-15 10:30:57.018207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:19.973 [2024-07-15 10:30:57.018217] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x180100 00:23:19.973 [2024-07-15 10:30:57.018223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x8 key:0x180100 00:23:19.973 [2024-07-15 10:30:57.018228] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x180100 00:23:19.973 [2024-07-15 10:30:57.018256] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.973 [2024-07-15 10:30:57.018261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:19.973 [2024-07-15 10:30:57.018270] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x180100 00:23:19.973 ===================================================== 00:23:19.973 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:19.973 
===================================================== 00:23:19.973 Controller Capabilities/Features 00:23:19.973 ================================ 00:23:19.973 Vendor ID: 0000 00:23:19.973 Subsystem Vendor ID: 0000 00:23:19.973 Serial Number: .................... 00:23:19.973 Model Number: ........................................ 00:23:19.973 Firmware Version: 24.09 00:23:19.973 Recommended Arb Burst: 0 00:23:19.973 IEEE OUI Identifier: 00 00 00 00:23:19.973 Multi-path I/O 00:23:19.973 May have multiple subsystem ports: No 00:23:19.973 May have multiple controllers: No 00:23:19.973 Associated with SR-IOV VF: No 00:23:19.973 Max Data Transfer Size: 131072 00:23:19.973 Max Number of Namespaces: 0 00:23:19.973 Max Number of I/O Queues: 1024 00:23:19.973 NVMe Specification Version (VS): 1.3 00:23:19.973 NVMe Specification Version (Identify): 1.3 00:23:19.973 Maximum Queue Entries: 128 00:23:19.973 Contiguous Queues Required: Yes 00:23:19.973 Arbitration Mechanisms Supported 00:23:19.973 Weighted Round Robin: Not Supported 00:23:19.973 Vendor Specific: Not Supported 00:23:19.973 Reset Timeout: 15000 ms 00:23:19.973 Doorbell Stride: 4 bytes 00:23:19.973 NVM Subsystem Reset: Not Supported 00:23:19.973 Command Sets Supported 00:23:19.973 NVM Command Set: Supported 00:23:19.973 Boot Partition: Not Supported 00:23:19.973 Memory Page Size Minimum: 4096 bytes 00:23:19.973 Memory Page Size Maximum: 4096 bytes 00:23:19.973 Persistent Memory Region: Not Supported 00:23:19.973 Optional Asynchronous Events Supported 00:23:19.973 Namespace Attribute Notices: Not Supported 00:23:19.973 Firmware Activation Notices: Not Supported 00:23:19.973 ANA Change Notices: Not Supported 00:23:19.973 PLE Aggregate Log Change Notices: Not Supported 00:23:19.973 LBA Status Info Alert Notices: Not Supported 00:23:19.973 EGE Aggregate Log Change Notices: Not Supported 00:23:19.973 Normal NVM Subsystem Shutdown event: Not Supported 00:23:19.973 Zone Descriptor Change Notices: Not Supported 00:23:19.973 Discovery Log Change Notices: Supported 00:23:19.973 Controller Attributes 00:23:19.973 128-bit Host Identifier: Not Supported 00:23:19.973 Non-Operational Permissive Mode: Not Supported 00:23:19.973 NVM Sets: Not Supported 00:23:19.973 Read Recovery Levels: Not Supported 00:23:19.973 Endurance Groups: Not Supported 00:23:19.973 Predictable Latency Mode: Not Supported 00:23:19.973 Traffic Based Keep ALive: Not Supported 00:23:19.973 Namespace Granularity: Not Supported 00:23:19.973 SQ Associations: Not Supported 00:23:19.973 UUID List: Not Supported 00:23:19.973 Multi-Domain Subsystem: Not Supported 00:23:19.973 Fixed Capacity Management: Not Supported 00:23:19.973 Variable Capacity Management: Not Supported 00:23:19.973 Delete Endurance Group: Not Supported 00:23:19.973 Delete NVM Set: Not Supported 00:23:19.973 Extended LBA Formats Supported: Not Supported 00:23:19.973 Flexible Data Placement Supported: Not Supported 00:23:19.973 00:23:19.973 Controller Memory Buffer Support 00:23:19.973 ================================ 00:23:19.973 Supported: No 00:23:19.973 00:23:19.973 Persistent Memory Region Support 00:23:19.973 ================================ 00:23:19.973 Supported: No 00:23:19.973 00:23:19.973 Admin Command Set Attributes 00:23:19.973 ============================ 00:23:19.973 Security Send/Receive: Not Supported 00:23:19.973 Format NVM: Not Supported 00:23:19.973 Firmware Activate/Download: Not Supported 00:23:19.973 Namespace Management: Not Supported 00:23:19.973 Device Self-Test: Not Supported 00:23:19.973 
Directives: Not Supported 00:23:19.973 NVMe-MI: Not Supported 00:23:19.973 Virtualization Management: Not Supported 00:23:19.973 Doorbell Buffer Config: Not Supported 00:23:19.973 Get LBA Status Capability: Not Supported 00:23:19.973 Command & Feature Lockdown Capability: Not Supported 00:23:19.973 Abort Command Limit: 1 00:23:19.973 Async Event Request Limit: 4 00:23:19.973 Number of Firmware Slots: N/A 00:23:19.973 Firmware Slot 1 Read-Only: N/A 00:23:19.973 Firmware Activation Without Reset: N/A 00:23:19.973 Multiple Update Detection Support: N/A 00:23:19.973 Firmware Update Granularity: No Information Provided 00:23:19.973 Per-Namespace SMART Log: No 00:23:19.973 Asymmetric Namespace Access Log Page: Not Supported 00:23:19.973 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:19.973 Command Effects Log Page: Not Supported 00:23:19.973 Get Log Page Extended Data: Supported 00:23:19.973 Telemetry Log Pages: Not Supported 00:23:19.973 Persistent Event Log Pages: Not Supported 00:23:19.973 Supported Log Pages Log Page: May Support 00:23:19.973 Commands Supported & Effects Log Page: Not Supported 00:23:19.973 Feature Identifiers & Effects Log Page:May Support 00:23:19.973 NVMe-MI Commands & Effects Log Page: May Support 00:23:19.973 Data Area 4 for Telemetry Log: Not Supported 00:23:19.973 Error Log Page Entries Supported: 128 00:23:19.973 Keep Alive: Not Supported 00:23:19.973 00:23:19.973 NVM Command Set Attributes 00:23:19.973 ========================== 00:23:19.973 Submission Queue Entry Size 00:23:19.973 Max: 1 00:23:19.973 Min: 1 00:23:19.973 Completion Queue Entry Size 00:23:19.973 Max: 1 00:23:19.973 Min: 1 00:23:19.973 Number of Namespaces: 0 00:23:19.973 Compare Command: Not Supported 00:23:19.973 Write Uncorrectable Command: Not Supported 00:23:19.973 Dataset Management Command: Not Supported 00:23:19.973 Write Zeroes Command: Not Supported 00:23:19.973 Set Features Save Field: Not Supported 00:23:19.973 Reservations: Not Supported 00:23:19.973 Timestamp: Not Supported 00:23:19.973 Copy: Not Supported 00:23:19.973 Volatile Write Cache: Not Present 00:23:19.973 Atomic Write Unit (Normal): 1 00:23:19.973 Atomic Write Unit (PFail): 1 00:23:19.973 Atomic Compare & Write Unit: 1 00:23:19.973 Fused Compare & Write: Supported 00:23:19.973 Scatter-Gather List 00:23:19.973 SGL Command Set: Supported 00:23:19.973 SGL Keyed: Supported 00:23:19.973 SGL Bit Bucket Descriptor: Not Supported 00:23:19.973 SGL Metadata Pointer: Not Supported 00:23:19.973 Oversized SGL: Not Supported 00:23:19.973 SGL Metadata Address: Not Supported 00:23:19.973 SGL Offset: Supported 00:23:19.973 Transport SGL Data Block: Not Supported 00:23:19.973 Replay Protected Memory Block: Not Supported 00:23:19.973 00:23:19.973 Firmware Slot Information 00:23:19.973 ========================= 00:23:19.973 Active slot: 0 00:23:19.973 00:23:19.973 00:23:19.973 Error Log 00:23:19.973 ========= 00:23:19.973 00:23:19.973 Active Namespaces 00:23:19.973 ================= 00:23:19.973 Discovery Log Page 00:23:19.973 ================== 00:23:19.973 Generation Counter: 2 00:23:19.973 Number of Records: 2 00:23:19.973 Record Format: 0 00:23:19.973 00:23:19.973 Discovery Log Entry 0 00:23:19.973 ---------------------- 00:23:19.973 Transport Type: 1 (RDMA) 00:23:19.973 Address Family: 1 (IPv4) 00:23:19.973 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:19.973 Entry Flags: 00:23:19.973 Duplicate Returned Information: 1 00:23:19.973 Explicit Persistent Connection Support for Discovery: 1 00:23:19.973 Transport Requirements: 
00:23:19.973 Secure Channel: Not Required 00:23:19.973 Port ID: 0 (0x0000) 00:23:19.973 Controller ID: 65535 (0xffff) 00:23:19.973 Admin Max SQ Size: 128 00:23:19.974 Transport Service Identifier: 4420 00:23:19.974 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:19.974 Transport Address: 192.168.100.8 00:23:19.974 Transport Specific Address Subtype - RDMA 00:23:19.974 RDMA QP Service Type: 1 (Reliable Connected) 00:23:19.974 RDMA Provider Type: 1 (No provider specified) 00:23:19.974 RDMA CM Service: 1 (RDMA_CM) 00:23:19.974 Discovery Log Entry 1 00:23:19.974 ---------------------- 00:23:19.974 Transport Type: 1 (RDMA) 00:23:19.974 Address Family: 1 (IPv4) 00:23:19.974 Subsystem Type: 2 (NVM Subsystem) 00:23:19.974 Entry Flags: 00:23:19.974 Duplicate Returned Information: 0 00:23:19.974 Explicit Persistent Connection Support for Discovery: 0 00:23:19.974 Transport Requirements: 00:23:19.974 Secure Channel: Not Required 00:23:19.974 Port ID: 0 (0x0000) 00:23:19.974 Controller ID: 65535 (0xffff) 00:23:19.974 Admin Max SQ Size: [2024-07-15 10:30:57.018348] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:23:19.974 [2024-07-15 10:30:57.018357] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 58117 doesn't match qid 00:23:19.974 [2024-07-15 10:30:57.018370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32607 cdw0:5 sqhd:8ad0 p:0 m:0 dnr:0 00:23:19.974 [2024-07-15 10:30:57.018376] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 58117 doesn't match qid 00:23:19.974 [2024-07-15 10:30:57.018382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32607 cdw0:5 sqhd:8ad0 p:0 m:0 dnr:0 00:23:19.974 [2024-07-15 10:30:57.018388] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 58117 doesn't match qid 00:23:19.974 [2024-07-15 10:30:57.018394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32607 cdw0:5 sqhd:8ad0 p:0 m:0 dnr:0 00:23:19.974 [2024-07-15 10:30:57.018399] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 58117 doesn't match qid 00:23:19.974 [2024-07-15 10:30:57.018405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32607 cdw0:5 sqhd:8ad0 p:0 m:0 dnr:0 00:23:19.974 [2024-07-15 10:30:57.018414] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x180100 00:23:19.974 [2024-07-15 10:30:57.018422] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:19.974 [2024-07-15 10:30:57.018440] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.974 [2024-07-15 10:30:57.018445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0010 p:0 m:0 dnr:0 00:23:19.974 [2024-07-15 10:30:57.018453] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:19.974 [2024-07-15 10:30:57.018461] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:19.974 [2024-07-15 10:30:57.018466] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x180100 00:23:19.974 [2024-07-15 
10:30:57.018484] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.974 [2024-07-15 10:30:57.018489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:19.974 [2024-07-15 10:30:57.018495] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:23:19.974 [2024-07-15 10:30:57.018500] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:23:19.974 [2024-07-15 10:30:57.018504] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x180100 00:23:19.974 [2024-07-15 10:30:57.018512] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:19.974 [2024-07-15 10:30:57.018519] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:19.974 [2024-07-15 10:30:57.018539] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.974 [2024-07-15 10:30:57.018543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:23:19.974 [2024-07-15 10:30:57.018549] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x180100 00:23:19.974 [2024-07-15 10:30:57.018557] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:19.974 [2024-07-15 10:30:57.018564] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:19.974 [2024-07-15 10:30:57.018584] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.974 [2024-07-15 10:30:57.018588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:23:19.974 [2024-07-15 10:30:57.018594] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x180100 00:23:19.974 [2024-07-15 10:30:57.018603] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:19.974 [2024-07-15 10:30:57.018609] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:19.974 [2024-07-15 10:30:57.018631] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.974 [2024-07-15 10:30:57.018635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:23:19.974 [2024-07-15 10:30:57.018640] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x180100 00:23:19.974 [2024-07-15 10:30:57.018649] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:19.974 [2024-07-15 10:30:57.018656] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:19.974 [2024-07-15 10:30:57.018680] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.974 [2024-07-15 10:30:57.018685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:23:19.974 [2024-07-15 10:30:57.018690] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x180100 00:23:19.974 [2024-07-15 10:30:57.018699] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:19.974 [2024-07-15 10:30:57.018706] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:19.974 [2024-07-15 10:30:57.018729] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.974 [2024-07-15 10:30:57.018734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:23:19.974 [2024-07-15 10:30:57.018740] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x180100 00:23:19.974 [2024-07-15 10:30:57.018749] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:19.974 [2024-07-15 10:30:57.018756] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:19.974 [2024-07-15 10:30:57.018775] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.974 [2024-07-15 10:30:57.018780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:23:19.974 [2024-07-15 10:30:57.018785] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x180100 00:23:19.974 [2024-07-15 10:30:57.018794] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:19.974 [2024-07-15 10:30:57.018800] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:19.974 [2024-07-15 10:30:57.018819] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.974 [2024-07-15 10:30:57.018824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:23:19.974 [2024-07-15 10:30:57.018829] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x180100 00:23:19.974 [2024-07-15 10:30:57.018838] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:19.974 [2024-07-15 10:30:57.018845] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:19.974 [2024-07-15 10:30:57.018866] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.974 [2024-07-15 10:30:57.018871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:23:19.974 [2024-07-15 10:30:57.018876] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x180100 00:23:19.974 [2024-07-15 10:30:57.018885] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:19.974 [2024-07-15 10:30:57.018891] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 
0x0 len:0x0 key:0x0 00:23:19.974 [2024-07-15 10:30:57.018914] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.974 [2024-07-15 10:30:57.018919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:23:19.974 [2024-07-15 10:30:57.018924] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x180100 00:23:19.974 [2024-07-15 10:30:57.018933] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:19.974 [2024-07-15 10:30:57.018939] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:19.974 [2024-07-15 10:30:57.018958] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.974 [2024-07-15 10:30:57.018963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:23:19.974 [2024-07-15 10:30:57.018968] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x180100 00:23:19.974 [2024-07-15 10:30:57.018977] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:19.974 [2024-07-15 10:30:57.018983] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:19.974 [2024-07-15 10:30:57.019008] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.974 [2024-07-15 10:30:57.019013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:23:19.974 [2024-07-15 10:30:57.019018] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x180100 00:23:19.974 [2024-07-15 10:30:57.019026] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:19.974 [2024-07-15 10:30:57.019033] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:19.974 [2024-07-15 10:30:57.019054] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.974 [2024-07-15 10:30:57.019059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:23:19.974 [2024-07-15 10:30:57.019064] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x180100 00:23:19.974 [2024-07-15 10:30:57.019072] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:19.974 [2024-07-15 10:30:57.019079] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:19.974 [2024-07-15 10:30:57.019102] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.975 [2024-07-15 10:30:57.019107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:23:19.975 [2024-07-15 10:30:57.019112] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x180100 00:23:19.975 [2024-07-15 10:30:57.019120] 
nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:19.975 [2024-07-15 10:30:57.019127] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:19.975 [2024-07-15 10:30:57.019150] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.975 [2024-07-15 10:30:57.019155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:23:19.975 [2024-07-15 10:30:57.019160] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x180100 00:23:19.975 [2024-07-15 10:30:57.019169] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:19.975 [2024-07-15 10:30:57.019175] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:19.975 [2024-07-15 10:30:57.019196] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.975 [2024-07-15 10:30:57.019201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:23:19.975 [2024-07-15 10:30:57.019206] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x180100 00:23:19.975 [2024-07-15 10:30:57.019215] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:19.975 [2024-07-15 10:30:57.019221] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:19.975 [2024-07-15 10:30:57.019245] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.975 [2024-07-15 10:30:57.019251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:23:19.975 [2024-07-15 10:30:57.019256] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x180100 00:23:19.975 [2024-07-15 10:30:57.019264] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:19.975 [2024-07-15 10:30:57.019273] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:19.975 [2024-07-15 10:30:57.019294] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.975 [2024-07-15 10:30:57.019298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:23:19.975 [2024-07-15 10:30:57.019304] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x180100 00:23:19.975 [2024-07-15 10:30:57.019312] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:19.975 [2024-07-15 10:30:57.019319] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:19.975 [2024-07-15 10:30:57.019340] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.975 [2024-07-15 10:30:57.019344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:23:19.975 [2024-07-15 10:30:57.019350] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x180100 00:23:19.975 [2024-07-15 10:30:57.019358] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:19.975 [2024-07-15 10:30:57.019365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:19.975 [2024-07-15 10:30:57.019388] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.975 [2024-07-15 10:30:57.019392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:23:19.975 [2024-07-15 10:30:57.019398] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x180100 00:23:19.975 [2024-07-15 10:30:57.019406] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:19.975 [2024-07-15 10:30:57.019413] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:19.975 [2024-07-15 10:30:57.019434] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.975 [2024-07-15 10:30:57.019439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:23:19.975 [2024-07-15 10:30:57.019444] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x180100 00:23:19.975 [2024-07-15 10:30:57.019452] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:19.975 [2024-07-15 10:30:57.019459] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:19.975 [2024-07-15 10:30:57.019478] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.975 [2024-07-15 10:30:57.019483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:23:19.975 [2024-07-15 10:30:57.019488] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x180100 00:23:19.975 [2024-07-15 10:30:57.019496] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:19.975 [2024-07-15 10:30:57.019503] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:19.975 [2024-07-15 10:30:57.019526] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.975 [2024-07-15 10:30:57.019531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:23:19.975 [2024-07-15 10:30:57.019536] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x180100 00:23:19.975 [2024-07-15 10:30:57.019544] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:19.975 [2024-07-15 10:30:57.019553] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 
0x0 len:0x0 key:0x0 00:23:19.975 [2024-07-15 10:30:57.019574] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.975 [2024-07-15 10:30:57.019578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:23:19.975 [2024-07-15 10:30:57.019584] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x180100 00:23:19.975 [2024-07-15 10:30:57.019592] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:19.975 [2024-07-15 10:30:57.019599] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:19.975 [2024-07-15 10:30:57.019620] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.975 [2024-07-15 10:30:57.019625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:23:19.975 [2024-07-15 10:30:57.019630] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x180100 00:23:19.975 [2024-07-15 10:30:57.019638] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:19.975 [2024-07-15 10:30:57.019645] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:19.975 [2024-07-15 10:30:57.019668] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.975 [2024-07-15 10:30:57.019673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:23:19.975 [2024-07-15 10:30:57.019678] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x180100 00:23:19.975 [2024-07-15 10:30:57.019686] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:19.975 [2024-07-15 10:30:57.019693] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:19.975 [2024-07-15 10:30:57.019712] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.975 [2024-07-15 10:30:57.019717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:23:19.975 [2024-07-15 10:30:57.019722] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x180100 00:23:19.975 [2024-07-15 10:30:57.019730] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:19.975 [2024-07-15 10:30:57.019737] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:19.975 [2024-07-15 10:30:57.019758] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.975 [2024-07-15 10:30:57.019763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:23:19.975 [2024-07-15 10:30:57.019768] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x180100 00:23:19.975 [2024-07-15 10:30:57.019776] 
nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:19.975 [2024-07-15 10:30:57.019783] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:19.976 [2024-07-15 10:30:57.019806] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.976 [2024-07-15 10:30:57.019811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:23:19.976 [2024-07-15 10:30:57.019816] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x180100 00:23:19.976 [2024-07-15 10:30:57.019826] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:19.976 [2024-07-15 10:30:57.019833] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:19.976 [2024-07-15 10:30:57.019852] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.976 [2024-07-15 10:30:57.019856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:23:19.976 [2024-07-15 10:30:57.019862] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x180100 00:23:19.976 [2024-07-15 10:30:57.019870] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:19.976 [2024-07-15 10:30:57.019877] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:19.976 [2024-07-15 10:30:57.019902] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.976 [2024-07-15 10:30:57.019907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:23:19.976 [2024-07-15 10:30:57.019912] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x180100 00:23:19.976 [2024-07-15 10:30:57.019920] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:19.976 [2024-07-15 10:30:57.019927] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:19.976 [2024-07-15 10:30:57.019948] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.976 [2024-07-15 10:30:57.019953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:23:19.976 [2024-07-15 10:30:57.019958] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x180100 00:23:19.976 [2024-07-15 10:30:57.019966] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:19.976 [2024-07-15 10:30:57.019973] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:19.976 [2024-07-15 10:30:57.019994] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.976 [2024-07-15 10:30:57.019999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:23:19.976 [2024-07-15 10:30:57.020004] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x180100 00:23:19.976 [2024-07-15 10:30:57.020012] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:19.976 [2024-07-15 10:30:57.020019] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:19.976 [2024-07-15 10:30:57.020038] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.976 [2024-07-15 10:30:57.020043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:23:19.976 [2024-07-15 10:30:57.020048] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x180100 00:23:19.976 [2024-07-15 10:30:57.020056] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:19.976 [2024-07-15 10:30:57.020063] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:19.976 [2024-07-15 10:30:57.020084] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.976 [2024-07-15 10:30:57.020089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:23:19.976 [2024-07-15 10:30:57.020094] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x180100 00:23:19.976 [2024-07-15 10:30:57.020104] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:19.976 [2024-07-15 10:30:57.020110] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:19.976 [2024-07-15 10:30:57.020138] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.976 [2024-07-15 10:30:57.020142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:23:19.976 [2024-07-15 10:30:57.020147] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x180100 00:23:19.976 [2024-07-15 10:30:57.020156] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:19.976 [2024-07-15 10:30:57.020163] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:19.976 [2024-07-15 10:30:57.020184] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.976 [2024-07-15 10:30:57.020188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:23:19.976 [2024-07-15 10:30:57.020193] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x180100 00:23:19.976 [2024-07-15 10:30:57.020202] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:19.976 [2024-07-15 10:30:57.020209] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 
0x0 len:0x0 key:0x0 00:23:19.976 [2024-07-15 10:30:57.020232] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.976 [2024-07-15 10:30:57.020237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:23:19.976 [2024-07-15 10:30:57.020243] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x180100 00:23:19.976 [2024-07-15 10:30:57.020251] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:19.976 [2024-07-15 10:30:57.020258] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:19.976 [2024-07-15 10:30:57.020283] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.976 [2024-07-15 10:30:57.020288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:23:19.976 [2024-07-15 10:30:57.020293] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x180100 00:23:19.976 [2024-07-15 10:30:57.020301] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:19.976 [2024-07-15 10:30:57.020308] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:19.976 [2024-07-15 10:30:57.020327] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.976 [2024-07-15 10:30:57.020332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:23:19.976 [2024-07-15 10:30:57.020337] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x180100 00:23:19.976 [2024-07-15 10:30:57.020345] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:19.976 [2024-07-15 10:30:57.020352] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:19.976 [2024-07-15 10:30:57.020371] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.976 [2024-07-15 10:30:57.020375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:23:19.976 [2024-07-15 10:30:57.020382] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x180100 00:23:19.976 [2024-07-15 10:30:57.020390] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:19.976 [2024-07-15 10:30:57.020397] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:19.976 [2024-07-15 10:30:57.020416] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.976 [2024-07-15 10:30:57.020421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:23:19.976 [2024-07-15 10:30:57.020426] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x180100 00:23:19.976 [2024-07-15 10:30:57.020435] 
nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:19.976 [2024-07-15 10:30:57.020441] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:19.976 [2024-07-15 10:30:57.020464] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.976 [2024-07-15 10:30:57.020469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:23:19.976 [2024-07-15 10:30:57.020474] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x180100 00:23:19.976 [2024-07-15 10:30:57.020483] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:19.976 [2024-07-15 10:30:57.020489] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:19.976 [2024-07-15 10:30:57.020508] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.976 [2024-07-15 10:30:57.020513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:23:19.976 [2024-07-15 10:30:57.020518] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x180100 00:23:19.976 [2024-07-15 10:30:57.020526] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:19.976 [2024-07-15 10:30:57.020533] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:19.976 [2024-07-15 10:30:57.020558] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.976 [2024-07-15 10:30:57.020563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:23:19.976 [2024-07-15 10:30:57.020568] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x180100 00:23:19.976 [2024-07-15 10:30:57.020576] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:19.976 [2024-07-15 10:30:57.020583] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:19.976 [2024-07-15 10:30:57.020609] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.976 [2024-07-15 10:30:57.020613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:23:19.976 [2024-07-15 10:30:57.020618] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x180100 00:23:19.976 [2024-07-15 10:30:57.020627] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:19.976 [2024-07-15 10:30:57.020633] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:19.976 [2024-07-15 10:30:57.020656] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.976 [2024-07-15 10:30:57.020661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:23:19.976 [2024-07-15 10:30:57.020668] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x180100 00:23:19.976 [2024-07-15 10:30:57.020676] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:19.976 [2024-07-15 10:30:57.020683] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:19.977 [2024-07-15 10:30:57.020700] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.977 [2024-07-15 10:30:57.020704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:23:19.977 [2024-07-15 10:30:57.020710] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x180100 00:23:19.977 [2024-07-15 10:30:57.020718] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:19.977 [2024-07-15 10:30:57.020725] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:19.977 [2024-07-15 10:30:57.020744] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.977 [2024-07-15 10:30:57.020748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:23:19.977 [2024-07-15 10:30:57.020754] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x180100 00:23:19.977 [2024-07-15 10:30:57.020762] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:19.977 [2024-07-15 10:30:57.020769] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:19.977 [2024-07-15 10:30:57.020788] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.977 [2024-07-15 10:30:57.020792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:23:19.977 [2024-07-15 10:30:57.020797] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x180100 00:23:19.977 [2024-07-15 10:30:57.020806] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:19.977 [2024-07-15 10:30:57.020813] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:19.977 [2024-07-15 10:30:57.020836] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.977 [2024-07-15 10:30:57.020840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:23:19.977 [2024-07-15 10:30:57.020845] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x180100 00:23:19.977 [2024-07-15 10:30:57.020854] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:19.977 [2024-07-15 10:30:57.020861] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 
0x0 len:0x0 key:0x0 00:23:19.977 [2024-07-15 10:30:57.020880] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.977 [2024-07-15 10:30:57.020884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:23:19.977 [2024-07-15 10:30:57.020889] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x180100 00:23:19.977 [2024-07-15 10:30:57.020898] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:19.977 [2024-07-15 10:30:57.020905] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:19.977 [2024-07-15 10:30:57.020928] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.977 [2024-07-15 10:30:57.020933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:23:19.977 [2024-07-15 10:30:57.020939] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x180100 00:23:19.977 [2024-07-15 10:30:57.020947] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:19.977 [2024-07-15 10:30:57.020954] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:19.977 [2024-07-15 10:30:57.020977] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.977 [2024-07-15 10:30:57.020982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:23:19.977 [2024-07-15 10:30:57.020987] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x180100 00:23:19.977 [2024-07-15 10:30:57.020995] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:19.977 [2024-07-15 10:30:57.021002] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:19.977 [2024-07-15 10:30:57.021021] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.977 [2024-07-15 10:30:57.021026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:23:19.977 [2024-07-15 10:30:57.021031] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x180100 00:23:19.977 [2024-07-15 10:30:57.021039] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:19.977 [2024-07-15 10:30:57.021046] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:19.977 [2024-07-15 10:30:57.021069] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.977 [2024-07-15 10:30:57.021074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:23:19.977 [2024-07-15 10:30:57.021079] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x180100 00:23:19.977 [2024-07-15 10:30:57.021087] 
nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:19.977 [2024-07-15 10:30:57.021094] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:19.977 [2024-07-15 10:30:57.021119] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.977 [2024-07-15 10:30:57.021124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:23:19.977 [2024-07-15 10:30:57.021129] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x180100 00:23:19.977 [2024-07-15 10:30:57.021138] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:19.977 [2024-07-15 10:30:57.021144] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:19.977 [2024-07-15 10:30:57.021165] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.977 [2024-07-15 10:30:57.021170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:23:19.977 [2024-07-15 10:30:57.021175] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x180100 00:23:19.977 [2024-07-15 10:30:57.021184] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:19.977 [2024-07-15 10:30:57.021190] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:19.977 [2024-07-15 10:30:57.021209] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.977 [2024-07-15 10:30:57.021214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:23:19.977 [2024-07-15 10:30:57.021219] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x180100 00:23:19.977 [2024-07-15 10:30:57.021227] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:19.977 [2024-07-15 10:30:57.025249] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:19.977 [2024-07-15 10:30:57.025273] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:19.977 [2024-07-15 10:30:57.025278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:000c p:0 m:0 dnr:0 00:23:19.977 [2024-07-15 10:30:57.025283] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x180100 00:23:19.977 [2024-07-15 10:30:57.025289] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:23:19.977 128 00:23:19.977 Transport Service Identifier: 4420 00:23:19.977 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:23:19.977 Transport Address: 192.168.100.8 00:23:19.977 Transport Specific Address Subtype - RDMA 00:23:19.977 RDMA QP Service Type: 1 (Reliable Connected) 00:23:19.977 RDMA Provider Type: 1 (No provider specified) 00:23:19.977 RDMA CM Service: 1 
(RDMA_CM) 00:23:19.977 10:30:57 nvmf_rdma.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:23:19.977 [2024-07-15 10:30:57.112696] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:23:19.977 [2024-07-15 10:30:57.112754] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3018850 ] 00:23:19.977 EAL: No free 2048 kB hugepages reported on node 1 00:23:20.243 [2024-07-15 10:30:57.166643] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:23:20.243 [2024-07-15 10:30:57.166726] nvme_rdma.c:2192:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:23:20.243 [2024-07-15 10:30:57.166740] nvme_rdma.c:1211:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:23:20.243 [2024-07-15 10:30:57.166744] nvme_rdma.c:1215:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:23:20.243 [2024-07-15 10:30:57.166769] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:23:20.243 [2024-07-15 10:30:57.177260] nvme_rdma.c: 430:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 00:23:20.243 [2024-07-15 10:30:57.199114] nvme_rdma.c:1100:nvme_rdma_connect_established: *DEBUG*: rc =0 00:23:20.243 [2024-07-15 10:30:57.199123] nvme_rdma.c:1105:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:23:20.243 [2024-07-15 10:30:57.199131] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x180100 00:23:20.243 [2024-07-15 10:30:57.199137] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x180100 00:23:20.243 [2024-07-15 10:30:57.199143] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x180100 00:23:20.243 [2024-07-15 10:30:57.199149] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x180100 00:23:20.243 [2024-07-15 10:30:57.199158] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x180100 00:23:20.243 [2024-07-15 10:30:57.199163] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x180100 00:23:20.243 [2024-07-15 10:30:57.199168] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x180100 00:23:20.243 [2024-07-15 10:30:57.199173] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x180100 00:23:20.243 [2024-07-15 10:30:57.199179] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x180100 00:23:20.243 [2024-07-15 10:30:57.199185] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x180100 00:23:20.243 [2024-07-15 10:30:57.199190] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x180100 00:23:20.243 [2024-07-15 10:30:57.199195] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7f8 
length 0x10 lkey 0x180100 00:23:20.243 [2024-07-15 10:30:57.199200] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x180100 00:23:20.243 [2024-07-15 10:30:57.199205] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x180100 00:23:20.243 [2024-07-15 10:30:57.199210] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x180100 00:23:20.243 [2024-07-15 10:30:57.199217] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x180100 00:23:20.243 [2024-07-15 10:30:57.199223] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x180100 00:23:20.243 [2024-07-15 10:30:57.199228] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x180100 00:23:20.243 [2024-07-15 10:30:57.199236] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x180100 00:23:20.243 [2024-07-15 10:30:57.199241] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x180100 00:23:20.243 [2024-07-15 10:30:57.199246] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x180100 00:23:20.243 [2024-07-15 10:30:57.199251] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x180100 00:23:20.243 [2024-07-15 10:30:57.199256] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x180100 00:23:20.243 [2024-07-15 10:30:57.199261] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x180100 00:23:20.243 [2024-07-15 10:30:57.199266] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x180100 00:23:20.243 [2024-07-15 10:30:57.199271] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x180100 00:23:20.243 [2024-07-15 10:30:57.199278] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x180100 00:23:20.243 [2024-07-15 10:30:57.199283] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x180100 00:23:20.243 [2024-07-15 10:30:57.199288] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x180100 00:23:20.243 [2024-07-15 10:30:57.199293] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x180100 00:23:20.243 [2024-07-15 10:30:57.199298] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x180100 00:23:20.243 [2024-07-15 10:30:57.199302] nvme_rdma.c:1119:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:23:20.243 [2024-07-15 10:30:57.199307] nvme_rdma.c:1122:nvme_rdma_connect_established: *DEBUG*: rc =0 00:23:20.243 [2024-07-15 10:30:57.199311] nvme_rdma.c:1127:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:23:20.243 [2024-07-15 10:30:57.199326] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:23:20.243 [2024-07-15 10:30:57.199337] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf1c0 len:0x400 key:0x180100 00:23:20.243 [2024-07-15 10:30:57.204920] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv 
completion 00:23:20.243 [2024-07-15 10:30:57.204930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:23:20.243 [2024-07-15 10:30:57.204936] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x180100 00:23:20.243 [2024-07-15 10:30:57.204943] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:20.243 [2024-07-15 10:30:57.204950] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:23:20.243 [2024-07-15 10:30:57.204956] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:23:20.243 [2024-07-15 10:30:57.204968] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:23:20.243 [2024-07-15 10:30:57.204977] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:20.243 [2024-07-15 10:30:57.204992] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:20.243 [2024-07-15 10:30:57.204998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:23:20.243 [2024-07-15 10:30:57.205004] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:23:20.243 [2024-07-15 10:30:57.205010] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x180100 00:23:20.243 [2024-07-15 10:30:57.205017] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:23:20.243 [2024-07-15 10:30:57.205025] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:23:20.243 [2024-07-15 10:30:57.205033] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:20.243 [2024-07-15 10:30:57.205049] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:20.243 [2024-07-15 10:30:57.205055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:23:20.243 [2024-07-15 10:30:57.205062] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:23:20.243 [2024-07-15 10:30:57.205067] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x180100 00:23:20.243 [2024-07-15 10:30:57.205073] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:23:20.243 [2024-07-15 10:30:57.205080] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:23:20.243 [2024-07-15 10:30:57.205087] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:20.243 [2024-07-15 10:30:57.205104] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:20.243 [2024-07-15 10:30:57.205110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 
sqhd:0004 p:0 m:0 dnr:0 00:23:20.243 [2024-07-15 10:30:57.205115] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:20.243 [2024-07-15 10:30:57.205120] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x180100 00:23:20.243 [2024-07-15 10:30:57.205128] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:23:20.243 [2024-07-15 10:30:57.205135] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:20.243 [2024-07-15 10:30:57.205191] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:20.243 [2024-07-15 10:30:57.205197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:20.244 [2024-07-15 10:30:57.205202] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:23:20.244 [2024-07-15 10:30:57.205207] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:23:20.244 [2024-07-15 10:30:57.205212] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x180100 00:23:20.244 [2024-07-15 10:30:57.205218] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:20.244 [2024-07-15 10:30:57.205323] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:23:20.244 [2024-07-15 10:30:57.205327] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:20.244 [2024-07-15 10:30:57.205335] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:23:20.244 [2024-07-15 10:30:57.205342] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:20.244 [2024-07-15 10:30:57.205357] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:20.244 [2024-07-15 10:30:57.205361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:20.244 [2024-07-15 10:30:57.205366] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:20.244 [2024-07-15 10:30:57.205371] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x180100 00:23:20.244 [2024-07-15 10:30:57.205379] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:23:20.244 [2024-07-15 10:30:57.205386] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:20.244 [2024-07-15 10:30:57.205401] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:20.244 [2024-07-15 10:30:57.205406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 
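(The DEBUG records around this point trace the standard controller-enable handshake carried over Fabrics Property Get/Set commands: read VS and CAP, check CC.EN, wait for CSTS.RDY = 0, write CC.EN = 1, then poll CSTS until RDY = 1 and the controller reports ready. The minimal C sketch below is for orientation only and is not SPDK source: prop_get()/prop_set() are hypothetical stand-ins for the Property Get/Set exchanges visible in the log, register offsets and bit positions follow the NVMe specification, and the 15000 ms state timeouts the log applies are omitted.)

    #include <stdint.h>

    #define NVME_REG_CC    0x14            /* Controller Configuration (NVMe spec offset) */
    #define NVME_REG_CSTS  0x1c            /* Controller Status                           */
    #define NVME_CC_EN     (1u << 0)       /* CC.EN                                       */
    #define NVME_CSTS_RDY  (1u << 0)       /* CSTS.RDY                                    */

    /* Hypothetical helpers standing in for the Fabrics Property Get/Set commands
     * issued on the admin queue pair in the log; not SPDK APIs. */
    uint32_t prop_get(uint32_t offset);
    void     prop_set(uint32_t offset, uint32_t value);

    static void enable_controller(void)
    {
        uint32_t cc = prop_get(NVME_REG_CC);            /* "check en" (in this run CC.EN was 0)   */

        if (cc & NVME_CC_EN)                            /* if already enabled, clear EN first     */
            prop_set(NVME_REG_CC, cc & ~NVME_CC_EN);

        while (prop_get(NVME_REG_CSTS) & NVME_CSTS_RDY)
            ;                                           /* "disable and wait for CSTS.RDY = 0"    */

        cc |= NVME_CC_EN;                               /* other CC fields left as read           */
        prop_set(NVME_REG_CC, cc);                      /* "Setting CC.EN = 1"                    */

        while (!(prop_get(NVME_REG_CSTS) & NVME_CSTS_RDY))
            ;                                           /* "wait for CSTS.RDY = 1" -> ready       */
    }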
00:23:20.244 [2024-07-15 10:30:57.205411] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:20.244 [2024-07-15 10:30:57.205415] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:23:20.244 [2024-07-15 10:30:57.205420] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x180100 00:23:20.244 [2024-07-15 10:30:57.205426] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:23:20.244 [2024-07-15 10:30:57.205435] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:23:20.244 [2024-07-15 10:30:57.205444] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:23:20.244 [2024-07-15 10:30:57.205451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x180100 00:23:20.244 [2024-07-15 10:30:57.205485] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:20.244 [2024-07-15 10:30:57.205490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:20.244 [2024-07-15 10:30:57.205496] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:23:20.244 [2024-07-15 10:30:57.205503] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:23:20.244 [2024-07-15 10:30:57.205507] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:23:20.244 [2024-07-15 10:30:57.205511] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:23:20.244 [2024-07-15 10:30:57.205516] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:23:20.244 [2024-07-15 10:30:57.205521] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:23:20.244 [2024-07-15 10:30:57.205525] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x180100 00:23:20.244 [2024-07-15 10:30:57.205532] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:23:20.244 [2024-07-15 10:30:57.205538] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:23:20.244 [2024-07-15 10:30:57.205545] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:20.244 [2024-07-15 10:30:57.205559] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:20.244 [2024-07-15 10:30:57.205563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:20.244 [2024-07-15 10:30:57.205571] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 
0x40 lkey 0x180100 00:23:20.244 [2024-07-15 10:30:57.205576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.244 [2024-07-15 10:30:57.205583] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0600 length 0x40 lkey 0x180100 00:23:20.244 [2024-07-15 10:30:57.205588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.244 [2024-07-15 10:30:57.205595] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:20.244 [2024-07-15 10:30:57.205601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.244 [2024-07-15 10:30:57.205608] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x180100 00:23:20.244 [2024-07-15 10:30:57.205613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.244 [2024-07-15 10:30:57.205618] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:20.244 [2024-07-15 10:30:57.205623] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x180100 00:23:20.244 [2024-07-15 10:30:57.205633] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:20.244 [2024-07-15 10:30:57.205639] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:23:20.244 [2024-07-15 10:30:57.205647] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:20.244 [2024-07-15 10:30:57.205662] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:20.244 [2024-07-15 10:30:57.205667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:23:20.244 [2024-07-15 10:30:57.205673] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:23:20.244 [2024-07-15 10:30:57.205682] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:23:20.244 [2024-07-15 10:30:57.205687] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x180100 00:23:20.244 [2024-07-15 10:30:57.205693] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:23:20.244 [2024-07-15 10:30:57.205699] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:23:20.244 [2024-07-15 10:30:57.205706] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:23:20.244 [2024-07-15 10:30:57.205713] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 
key:0x0 00:23:20.244 [2024-07-15 10:30:57.205729] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:20.244 [2024-07-15 10:30:57.205733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:000b p:0 m:0 dnr:0 00:23:20.244 [2024-07-15 10:30:57.205795] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:23:20.244 [2024-07-15 10:30:57.205800] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x180100 00:23:20.244 [2024-07-15 10:30:57.205808] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:23:20.244 [2024-07-15 10:30:57.205817] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:23:20.244 [2024-07-15 10:30:57.205824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x180100 00:23:20.244 [2024-07-15 10:30:57.205845] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:20.244 [2024-07-15 10:30:57.205850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:20.244 [2024-07-15 10:30:57.205862] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:23:20.244 [2024-07-15 10:30:57.205871] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:23:20.244 [2024-07-15 10:30:57.205877] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x180100 00:23:20.244 [2024-07-15 10:30:57.205884] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:23:20.244 [2024-07-15 10:30:57.205892] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:23:20.244 [2024-07-15 10:30:57.205899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000000 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x180100 00:23:20.244 [2024-07-15 10:30:57.205920] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:20.244 [2024-07-15 10:30:57.205925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:20.244 [2024-07-15 10:30:57.205937] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:23:20.244 [2024-07-15 10:30:57.205943] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x180100 00:23:20.244 [2024-07-15 10:30:57.205951] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:23:20.244 [2024-07-15 10:30:57.205959] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:23:20.244 [2024-07-15 10:30:57.205967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x180100 00:23:20.244 [2024-07-15 10:30:57.205987] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:20.244 [2024-07-15 10:30:57.205992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:20.244 [2024-07-15 10:30:57.206001] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:23:20.244 [2024-07-15 10:30:57.206006] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x180100 00:23:20.244 [2024-07-15 10:30:57.206012] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:23:20.244 [2024-07-15 10:30:57.206019] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:23:20.244 [2024-07-15 10:30:57.206027] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:23:20.245 [2024-07-15 10:30:57.206032] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:23:20.245 [2024-07-15 10:30:57.206038] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:23:20.245 [2024-07-15 10:30:57.206043] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:23:20.245 [2024-07-15 10:30:57.206047] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:23:20.245 [2024-07-15 10:30:57.206052] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:23:20.245 [2024-07-15 10:30:57.206065] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:23:20.245 [2024-07-15 10:30:57.206072] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:0 cdw10:00000001 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:20.245 [2024-07-15 10:30:57.206079] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x180100 00:23:20.245 [2024-07-15 10:30:57.206085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:20.245 [2024-07-15 10:30:57.206094] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:20.245 [2024-07-15 10:30:57.206098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:20.245 [2024-07-15 10:30:57.206104] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x180100 00:23:20.245 [2024-07-15 10:30:57.206109] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:20.245 [2024-07-15 10:30:57.206113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:20.245 [2024-07-15 
10:30:57.206119] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x180100 00:23:20.245 [2024-07-15 10:30:57.206127] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x180100 00:23:20.245 [2024-07-15 10:30:57.206133] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:20.245 [2024-07-15 10:30:57.206147] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:20.245 [2024-07-15 10:30:57.206155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:20.245 [2024-07-15 10:30:57.206162] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x180100 00:23:20.245 [2024-07-15 10:30:57.206171] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x180100 00:23:20.245 [2024-07-15 10:30:57.206178] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:20.245 [2024-07-15 10:30:57.206193] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:20.245 [2024-07-15 10:30:57.206198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:20.245 [2024-07-15 10:30:57.206203] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x180100 00:23:20.245 [2024-07-15 10:30:57.206211] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x180100 00:23:20.245 [2024-07-15 10:30:57.206217] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:20.245 [2024-07-15 10:30:57.206235] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:20.245 [2024-07-15 10:30:57.206240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:7e007e sqhd:0013 p:0 m:0 dnr:0 00:23:20.245 [2024-07-15 10:30:57.206245] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x180100 00:23:20.245 [2024-07-15 10:30:57.206257] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x180100 00:23:20.245 [2024-07-15 10:30:57.206264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c9000 len:0x2000 key:0x180100 00:23:20.245 [2024-07-15 10:30:57.206272] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:23:20.245 [2024-07-15 10:30:57.206279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x200 key:0x180100 00:23:20.245 [2024-07-15 10:30:57.206287] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b00 length 0x40 lkey 0x180100 00:23:20.245 [2024-07-15 10:30:57.206293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff 
cdw10:007f0003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x200 key:0x180100 00:23:20.245 [2024-07-15 10:30:57.206302] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c40 length 0x40 lkey 0x180100 00:23:20.245 [2024-07-15 10:30:57.206309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c7000 len:0x1000 key:0x180100 00:23:20.245 [2024-07-15 10:30:57.206316] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:20.245 [2024-07-15 10:30:57.206321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:20.245 [2024-07-15 10:30:57.206332] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x180100 00:23:20.245 [2024-07-15 10:30:57.206337] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:20.245 [2024-07-15 10:30:57.206342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:20.245 [2024-07-15 10:30:57.206350] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x180100 00:23:20.245 [2024-07-15 10:30:57.206356] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:20.245 [2024-07-15 10:30:57.206360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:20.245 [2024-07-15 10:30:57.206367] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x180100 00:23:20.245 [2024-07-15 10:30:57.206372] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:20.245 [2024-07-15 10:30:57.206377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:20.245 [2024-07-15 10:30:57.206385] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x180100 00:23:20.245 ===================================================== 00:23:20.245 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:23:20.245 ===================================================== 00:23:20.245 Controller Capabilities/Features 00:23:20.245 ================================ 00:23:20.245 Vendor ID: 8086 00:23:20.245 Subsystem Vendor ID: 8086 00:23:20.245 Serial Number: SPDK00000000000001 00:23:20.245 Model Number: SPDK bdev Controller 00:23:20.245 Firmware Version: 24.09 00:23:20.245 Recommended Arb Burst: 6 00:23:20.245 IEEE OUI Identifier: e4 d2 5c 00:23:20.245 Multi-path I/O 00:23:20.245 May have multiple subsystem ports: Yes 00:23:20.245 May have multiple controllers: Yes 00:23:20.245 Associated with SR-IOV VF: No 00:23:20.245 Max Data Transfer Size: 131072 00:23:20.245 Max Number of Namespaces: 32 00:23:20.245 Max Number of I/O Queues: 127 00:23:20.245 NVMe Specification Version (VS): 1.3 00:23:20.245 NVMe Specification Version (Identify): 1.3 00:23:20.245 Maximum Queue Entries: 128 00:23:20.245 Contiguous Queues Required: Yes 00:23:20.245 Arbitration Mechanisms Supported 00:23:20.245 Weighted Round Robin: Not Supported 00:23:20.245 Vendor Specific: Not Supported 00:23:20.245 Reset Timeout: 15000 ms 00:23:20.245 Doorbell Stride: 4 bytes 00:23:20.245 NVM Subsystem Reset: Not Supported 00:23:20.245 
Command Sets Supported 00:23:20.245 NVM Command Set: Supported 00:23:20.245 Boot Partition: Not Supported 00:23:20.245 Memory Page Size Minimum: 4096 bytes 00:23:20.245 Memory Page Size Maximum: 4096 bytes 00:23:20.245 Persistent Memory Region: Not Supported 00:23:20.245 Optional Asynchronous Events Supported 00:23:20.245 Namespace Attribute Notices: Supported 00:23:20.245 Firmware Activation Notices: Not Supported 00:23:20.245 ANA Change Notices: Not Supported 00:23:20.245 PLE Aggregate Log Change Notices: Not Supported 00:23:20.245 LBA Status Info Alert Notices: Not Supported 00:23:20.245 EGE Aggregate Log Change Notices: Not Supported 00:23:20.245 Normal NVM Subsystem Shutdown event: Not Supported 00:23:20.245 Zone Descriptor Change Notices: Not Supported 00:23:20.245 Discovery Log Change Notices: Not Supported 00:23:20.245 Controller Attributes 00:23:20.245 128-bit Host Identifier: Supported 00:23:20.245 Non-Operational Permissive Mode: Not Supported 00:23:20.245 NVM Sets: Not Supported 00:23:20.245 Read Recovery Levels: Not Supported 00:23:20.245 Endurance Groups: Not Supported 00:23:20.245 Predictable Latency Mode: Not Supported 00:23:20.245 Traffic Based Keep ALive: Not Supported 00:23:20.245 Namespace Granularity: Not Supported 00:23:20.245 SQ Associations: Not Supported 00:23:20.245 UUID List: Not Supported 00:23:20.245 Multi-Domain Subsystem: Not Supported 00:23:20.245 Fixed Capacity Management: Not Supported 00:23:20.245 Variable Capacity Management: Not Supported 00:23:20.245 Delete Endurance Group: Not Supported 00:23:20.245 Delete NVM Set: Not Supported 00:23:20.245 Extended LBA Formats Supported: Not Supported 00:23:20.245 Flexible Data Placement Supported: Not Supported 00:23:20.245 00:23:20.245 Controller Memory Buffer Support 00:23:20.245 ================================ 00:23:20.245 Supported: No 00:23:20.245 00:23:20.245 Persistent Memory Region Support 00:23:20.245 ================================ 00:23:20.245 Supported: No 00:23:20.245 00:23:20.245 Admin Command Set Attributes 00:23:20.245 ============================ 00:23:20.245 Security Send/Receive: Not Supported 00:23:20.245 Format NVM: Not Supported 00:23:20.245 Firmware Activate/Download: Not Supported 00:23:20.245 Namespace Management: Not Supported 00:23:20.245 Device Self-Test: Not Supported 00:23:20.245 Directives: Not Supported 00:23:20.245 NVMe-MI: Not Supported 00:23:20.245 Virtualization Management: Not Supported 00:23:20.245 Doorbell Buffer Config: Not Supported 00:23:20.245 Get LBA Status Capability: Not Supported 00:23:20.245 Command & Feature Lockdown Capability: Not Supported 00:23:20.245 Abort Command Limit: 4 00:23:20.245 Async Event Request Limit: 4 00:23:20.245 Number of Firmware Slots: N/A 00:23:20.245 Firmware Slot 1 Read-Only: N/A 00:23:20.245 Firmware Activation Without Reset: N/A 00:23:20.245 Multiple Update Detection Support: N/A 00:23:20.245 Firmware Update Granularity: No Information Provided 00:23:20.245 Per-Namespace SMART Log: No 00:23:20.245 Asymmetric Namespace Access Log Page: Not Supported 00:23:20.246 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:23:20.246 Command Effects Log Page: Supported 00:23:20.246 Get Log Page Extended Data: Supported 00:23:20.246 Telemetry Log Pages: Not Supported 00:23:20.246 Persistent Event Log Pages: Not Supported 00:23:20.246 Supported Log Pages Log Page: May Support 00:23:20.246 Commands Supported & Effects Log Page: Not Supported 00:23:20.246 Feature Identifiers & Effects Log Page:May Support 00:23:20.246 NVMe-MI Commands & Effects Log Page: May 
Support 00:23:20.246 Data Area 4 for Telemetry Log: Not Supported 00:23:20.246 Error Log Page Entries Supported: 128 00:23:20.246 Keep Alive: Supported 00:23:20.246 Keep Alive Granularity: 10000 ms 00:23:20.246 00:23:20.246 NVM Command Set Attributes 00:23:20.246 ========================== 00:23:20.246 Submission Queue Entry Size 00:23:20.246 Max: 64 00:23:20.246 Min: 64 00:23:20.246 Completion Queue Entry Size 00:23:20.246 Max: 16 00:23:20.246 Min: 16 00:23:20.246 Number of Namespaces: 32 00:23:20.246 Compare Command: Supported 00:23:20.246 Write Uncorrectable Command: Not Supported 00:23:20.246 Dataset Management Command: Supported 00:23:20.246 Write Zeroes Command: Supported 00:23:20.246 Set Features Save Field: Not Supported 00:23:20.246 Reservations: Supported 00:23:20.246 Timestamp: Not Supported 00:23:20.246 Copy: Supported 00:23:20.246 Volatile Write Cache: Present 00:23:20.246 Atomic Write Unit (Normal): 1 00:23:20.246 Atomic Write Unit (PFail): 1 00:23:20.246 Atomic Compare & Write Unit: 1 00:23:20.246 Fused Compare & Write: Supported 00:23:20.246 Scatter-Gather List 00:23:20.246 SGL Command Set: Supported 00:23:20.246 SGL Keyed: Supported 00:23:20.246 SGL Bit Bucket Descriptor: Not Supported 00:23:20.246 SGL Metadata Pointer: Not Supported 00:23:20.246 Oversized SGL: Not Supported 00:23:20.246 SGL Metadata Address: Not Supported 00:23:20.246 SGL Offset: Supported 00:23:20.246 Transport SGL Data Block: Not Supported 00:23:20.246 Replay Protected Memory Block: Not Supported 00:23:20.246 00:23:20.246 Firmware Slot Information 00:23:20.246 ========================= 00:23:20.246 Active slot: 1 00:23:20.246 Slot 1 Firmware Revision: 24.09 00:23:20.246 00:23:20.246 00:23:20.246 Commands Supported and Effects 00:23:20.246 ============================== 00:23:20.246 Admin Commands 00:23:20.246 -------------- 00:23:20.246 Get Log Page (02h): Supported 00:23:20.246 Identify (06h): Supported 00:23:20.246 Abort (08h): Supported 00:23:20.246 Set Features (09h): Supported 00:23:20.246 Get Features (0Ah): Supported 00:23:20.246 Asynchronous Event Request (0Ch): Supported 00:23:20.246 Keep Alive (18h): Supported 00:23:20.246 I/O Commands 00:23:20.246 ------------ 00:23:20.246 Flush (00h): Supported LBA-Change 00:23:20.246 Write (01h): Supported LBA-Change 00:23:20.246 Read (02h): Supported 00:23:20.246 Compare (05h): Supported 00:23:20.246 Write Zeroes (08h): Supported LBA-Change 00:23:20.246 Dataset Management (09h): Supported LBA-Change 00:23:20.246 Copy (19h): Supported LBA-Change 00:23:20.246 00:23:20.246 Error Log 00:23:20.246 ========= 00:23:20.246 00:23:20.246 Arbitration 00:23:20.246 =========== 00:23:20.246 Arbitration Burst: 1 00:23:20.246 00:23:20.246 Power Management 00:23:20.246 ================ 00:23:20.246 Number of Power States: 1 00:23:20.246 Current Power State: Power State #0 00:23:20.246 Power State #0: 00:23:20.246 Max Power: 0.00 W 00:23:20.246 Non-Operational State: Operational 00:23:20.246 Entry Latency: Not Reported 00:23:20.246 Exit Latency: Not Reported 00:23:20.246 Relative Read Throughput: 0 00:23:20.246 Relative Read Latency: 0 00:23:20.246 Relative Write Throughput: 0 00:23:20.246 Relative Write Latency: 0 00:23:20.246 Idle Power: Not Reported 00:23:20.246 Active Power: Not Reported 00:23:20.246 Non-Operational Permissive Mode: Not Supported 00:23:20.246 00:23:20.246 Health Information 00:23:20.246 ================== 00:23:20.246 Critical Warnings: 00:23:20.246 Available Spare Space: OK 00:23:20.246 Temperature: OK 00:23:20.246 Device Reliability: OK 00:23:20.246 
Read Only: No 00:23:20.246 Volatile Memory Backup: OK 00:23:20.246 Current Temperature: 0 Kelvin (-273 Celsius) 00:23:20.246 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:23:20.246 Available Spare: 0% 00:23:20.246 Available Spare Threshold: 0% 00:23:20.246 Life Percentage [2024-07-15 10:30:57.206479] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c40 length 0x40 lkey 0x180100 00:23:20.246 [2024-07-15 10:30:57.206487] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:20.246 [2024-07-15 10:30:57.206502] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:20.246 [2024-07-15 10:30:57.206506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:20.246 [2024-07-15 10:30:57.206511] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x180100 00:23:20.246 [2024-07-15 10:30:57.206538] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:23:20.246 [2024-07-15 10:30:57.206546] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 50247 doesn't match qid 00:23:20.246 [2024-07-15 10:30:57.206559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32654 cdw0:5 sqhd:0ad0 p:0 m:0 dnr:0 00:23:20.246 [2024-07-15 10:30:57.206564] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 50247 doesn't match qid 00:23:20.246 [2024-07-15 10:30:57.206571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32654 cdw0:5 sqhd:0ad0 p:0 m:0 dnr:0 00:23:20.246 [2024-07-15 10:30:57.206576] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 50247 doesn't match qid 00:23:20.246 [2024-07-15 10:30:57.206582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32654 cdw0:5 sqhd:0ad0 p:0 m:0 dnr:0 00:23:20.246 [2024-07-15 10:30:57.206588] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 50247 doesn't match qid 00:23:20.246 [2024-07-15 10:30:57.206594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32654 cdw0:5 sqhd:0ad0 p:0 m:0 dnr:0 00:23:20.246 [2024-07-15 10:30:57.206602] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x180100 00:23:20.246 [2024-07-15 10:30:57.206610] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:20.246 [2024-07-15 10:30:57.206623] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:20.246 [2024-07-15 10:30:57.206628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0019 p:0 m:0 dnr:0 00:23:20.246 [2024-07-15 10:30:57.206635] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:20.246 [2024-07-15 10:30:57.206642] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:20.246 [2024-07-15 10:30:57.206647] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x180100 00:23:20.246 [2024-07-15 10:30:57.206663] 
nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:20.246 [2024-07-15 10:30:57.206668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:20.246 [2024-07-15 10:30:57.206673] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:23:20.246 [2024-07-15 10:30:57.206677] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:23:20.246 [2024-07-15 10:30:57.206682] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x180100 00:23:20.246 [2024-07-15 10:30:57.206692] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:20.246 [2024-07-15 10:30:57.206699] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:20.246 [2024-07-15 10:30:57.206712] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:20.246 [2024-07-15 10:30:57.206717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:23:20.246 [2024-07-15 10:30:57.206722] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x180100 00:23:20.246 [2024-07-15 10:30:57.206731] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:20.246 [2024-07-15 10:30:57.206738] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:20.246 [2024-07-15 10:30:57.206751] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:20.246 [2024-07-15 10:30:57.206756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:23:20.246 [2024-07-15 10:30:57.206761] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x180100 00:23:20.246 [2024-07-15 10:30:57.206770] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:20.246 [2024-07-15 10:30:57.206778] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:20.246 [2024-07-15 10:30:57.206791] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:20.246 [2024-07-15 10:30:57.206796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:23:20.246 [2024-07-15 10:30:57.206801] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x180100 00:23:20.246 [2024-07-15 10:30:57.206810] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:20.246 [2024-07-15 10:30:57.206816] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:20.246 [2024-07-15 10:30:57.206832] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:20.246 [2024-07-15 10:30:57.206837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 
sqhd:001e p:0 m:0 dnr:0 00:23:20.246 [2024-07-15 10:30:57.206842] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x180100 00:23:20.246 [2024-07-15 10:30:57.206851] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:20.246 [2024-07-15 10:30:57.206858] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:20.246 [2024-07-15 10:30:57.206877] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:20.246 [2024-07-15 10:30:57.206882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:23:20.246 [2024-07-15 10:30:57.206888] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x180100 00:23:20.247 [2024-07-15 10:30:57.206897] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:20.247 [2024-07-15 10:30:57.206904] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:20.247 [2024-07-15 10:30:57.206918] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:20.247 [2024-07-15 10:30:57.206923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:23:20.247 [2024-07-15 10:30:57.206928] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x180100 00:23:20.247 [2024-07-15 10:30:57.206940] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:20.247 [2024-07-15 10:30:57.206948] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:20.247 [2024-07-15 10:30:57.206963] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:20.247 [2024-07-15 10:30:57.206967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:23:20.247 [2024-07-15 10:30:57.206973] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x180100 00:23:20.247 [2024-07-15 10:30:57.206981] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:20.247 [2024-07-15 10:30:57.206988] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:20.247 [2024-07-15 10:30:57.207003] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:20.247 [2024-07-15 10:30:57.207008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:23:20.247 [2024-07-15 10:30:57.207013] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x180100 00:23:20.247 [2024-07-15 10:30:57.207022] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:20.247 [2024-07-15 10:30:57.207029] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:20.247 
[2024-07-15 10:30:57.207042] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:20.247 [2024-07-15 10:30:57.207047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:23:20.247 [2024-07-15 10:30:57.207052] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x180100 00:23:20.247 [2024-07-15 10:30:57.207060] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:20.247 [2024-07-15 10:30:57.207067] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:20.247 [2024-07-15 10:30:57.207080] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:20.247 [2024-07-15 10:30:57.207085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:23:20.247 [2024-07-15 10:30:57.207090] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x180100 00:23:20.247 [2024-07-15 10:30:57.207099] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:20.247 [2024-07-15 10:30:57.207106] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:20.247 [2024-07-15 10:30:57.207119] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:20.247 [2024-07-15 10:30:57.207124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:23:20.247 [2024-07-15 10:30:57.207129] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x180100 00:23:20.247 [2024-07-15 10:30:57.207138] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:20.247 [2024-07-15 10:30:57.207145] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:20.247 [2024-07-15 10:30:57.207160] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:20.247 [2024-07-15 10:30:57.207164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:23:20.247 [2024-07-15 10:30:57.207171] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x180100 00:23:20.247 [2024-07-15 10:30:57.207179] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:20.247 [2024-07-15 10:30:57.207186] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:20.247 [2024-07-15 10:30:57.207200] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:20.247 [2024-07-15 10:30:57.207204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:23:20.247 [2024-07-15 10:30:57.207210] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x180100 00:23:20.247 [2024-07-15 10:30:57.207218] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local 
addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:20.247 [2024-07-15 10:30:57.207225] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:20.247 [2024-07-15 10:30:57.207241] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:20.247 [2024-07-15 10:30:57.207246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:23:20.247 [2024-07-15 10:30:57.207251] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x180100 00:23:20.247 [2024-07-15 10:30:57.207260] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:20.247 [2024-07-15 10:30:57.207266] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:20.247 [2024-07-15 10:30:57.207282] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:20.247 [2024-07-15 10:30:57.207286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:23:20.247 [2024-07-15 10:30:57.207292] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x180100 00:23:20.247 [2024-07-15 10:30:57.207300] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:20.247 [2024-07-15 10:30:57.207307] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:20.247 [2024-07-15 10:30:57.207324] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:20.247 [2024-07-15 10:30:57.207329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:23:20.247 [2024-07-15 10:30:57.207334] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x180100 00:23:20.247 [2024-07-15 10:30:57.207343] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:20.247 [2024-07-15 10:30:57.207350] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:20.247 [2024-07-15 10:30:57.207365] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:20.247 [2024-07-15 10:30:57.207369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:23:20.247 [2024-07-15 10:30:57.207374] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x180100 00:23:20.247 [2024-07-15 10:30:57.207383] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:20.247 [2024-07-15 10:30:57.207390] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:20.247 [2024-07-15 10:30:57.207409] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:20.247 [2024-07-15 10:30:57.207413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 
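(From "Prepare to destruct SSD" onward, this long run of FABRIC PROPERTY GET qid:0 cid:3 records is the host polling CSTS for shutdown completion after writing CC.SHN; "shutdown complete in 5 milliseconds" closes the sequence a few records below, with the final CSTS read returning cdw0:9, i.e. RDY plus SHST = shutdown complete. A matching C sketch follows, under the same assumptions as the one above: prop_get()/prop_set() are hypothetical helpers, field positions follow the NVMe specification, and the 10000 ms shutdown timeout noted in the log is omitted.)

    #include <stdint.h>

    #define NVME_REG_CC          0x14
    #define NVME_REG_CSTS        0x1c
    #define NVME_CC_SHN_MASK     (3u << 14)   /* CC.SHN, bits 15:14                        */
    #define NVME_CC_SHN_NORMAL   (1u << 14)   /* 01b = normal shutdown notification        */
    #define NVME_CSTS_SHST_MASK  (3u << 2)    /* CSTS.SHST, bits 3:2                       */
    #define NVME_CSTS_SHST_DONE  (2u << 2)    /* 10b = shutdown processing complete        */

    uint32_t prop_get(uint32_t offset);       /* hypothetical helpers, as in the sketch above */
    void     prop_set(uint32_t offset, uint32_t value);

    static void shutdown_controller(void)
    {
        uint32_t cc = prop_get(NVME_REG_CC);

        cc = (cc & ~NVME_CC_SHN_MASK) | NVME_CC_SHN_NORMAL;
        prop_set(NVME_REG_CC, cc);            /* the FABRIC PROPERTY SET after "Prepare to destruct SSD" */

        while ((prop_get(NVME_REG_CSTS) & NVME_CSTS_SHST_MASK) != NVME_CSTS_SHST_DONE)
            ;                                 /* the repeated CSTS property gets in this run of records  */
    }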
00:23:20.247 [2024-07-15 10:30:57.207420] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x180100 00:23:20.247 [2024-07-15 10:30:57.207428] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:20.247 [2024-07-15 10:30:57.207435] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:20.247 [2024-07-15 10:30:57.207448] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:20.247 [2024-07-15 10:30:57.207453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:23:20.247 [2024-07-15 10:30:57.207458] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x180100 00:23:20.247 [2024-07-15 10:30:57.207466] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:20.247 [2024-07-15 10:30:57.207473] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:20.247 [2024-07-15 10:30:57.207488] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:20.247 [2024-07-15 10:30:57.207493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:23:20.247 [2024-07-15 10:30:57.207498] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x180100 00:23:20.247 [2024-07-15 10:30:57.207506] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:20.248 [2024-07-15 10:30:57.207513] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:20.248 [2024-07-15 10:30:57.207530] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:20.248 [2024-07-15 10:30:57.207535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:23:20.248 [2024-07-15 10:30:57.207540] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x180100 00:23:20.248 [2024-07-15 10:30:57.207548] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:20.248 [2024-07-15 10:30:57.207555] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:20.248 [2024-07-15 10:30:57.207570] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:20.248 [2024-07-15 10:30:57.207575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:23:20.248 [2024-07-15 10:30:57.207580] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x180100 00:23:20.248 [2024-07-15 10:30:57.207588] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:20.248 [2024-07-15 10:30:57.207595] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:20.248 [2024-07-15 
10:30:57.207610] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:20.248 [2024-07-15 10:30:57.207615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:23:20.248 [2024-07-15 10:30:57.207620] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x180100 00:23:20.248 [2024-07-15 10:30:57.207628] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:20.248 [2024-07-15 10:30:57.207635] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:20.248 [2024-07-15 10:30:57.207648] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:20.248 [2024-07-15 10:30:57.207654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:23:20.248 [2024-07-15 10:30:57.207660] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x180100 00:23:20.248 [2024-07-15 10:30:57.207668] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:20.248 [2024-07-15 10:30:57.207675] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:20.248 [2024-07-15 10:30:57.207692] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:20.248 [2024-07-15 10:30:57.207697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:23:20.248 [2024-07-15 10:30:57.207702] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x180100 00:23:20.248 [2024-07-15 10:30:57.207710] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:20.248 [2024-07-15 10:30:57.207717] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:20.248 [2024-07-15 10:30:57.207732] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:20.248 [2024-07-15 10:30:57.207737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:23:20.248 [2024-07-15 10:30:57.207742] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x180100 00:23:20.248 [2024-07-15 10:30:57.207751] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:20.248 [2024-07-15 10:30:57.207757] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:20.248 [2024-07-15 10:30:57.207770] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:20.248 [2024-07-15 10:30:57.207775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:23:20.248 [2024-07-15 10:30:57.207780] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x180100 00:23:20.248 [2024-07-15 10:30:57.207789] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 
0x2000003d0740 length 0x40 lkey 0x180100 00:23:20.248 [2024-07-15 10:30:57.207795] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:20.248 [2024-07-15 10:30:57.207813] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:20.248 [2024-07-15 10:30:57.207817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:23:20.248 [2024-07-15 10:30:57.207822] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x180100 00:23:20.248 [2024-07-15 10:30:57.207831] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:20.248 [2024-07-15 10:30:57.207838] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:20.248 [2024-07-15 10:30:57.207853] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:20.248 [2024-07-15 10:30:57.207857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:23:20.248 [2024-07-15 10:30:57.207863] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x180100 00:23:20.248 [2024-07-15 10:30:57.207871] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:20.248 [2024-07-15 10:30:57.207878] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:20.248 [2024-07-15 10:30:57.207893] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:20.248 [2024-07-15 10:30:57.207898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:23:20.248 [2024-07-15 10:30:57.207904] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x180100 00:23:20.248 [2024-07-15 10:30:57.207912] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:20.248 [2024-07-15 10:30:57.207919] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:20.248 [2024-07-15 10:30:57.207932] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:20.248 [2024-07-15 10:30:57.207937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:23:20.248 [2024-07-15 10:30:57.207942] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x180100 00:23:20.248 [2024-07-15 10:30:57.207950] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:20.248 [2024-07-15 10:30:57.207957] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:20.248 [2024-07-15 10:30:57.207970] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:20.248 [2024-07-15 10:30:57.207975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:23:20.248 
[2024-07-15 10:30:57.207980] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x180100 00:23:20.248 [2024-07-15 10:30:57.207988] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:20.248 [2024-07-15 10:30:57.207995] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:20.248 [2024-07-15 10:30:57.208010] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:20.248 [2024-07-15 10:30:57.208015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:23:20.248 [2024-07-15 10:30:57.208020] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x180100 00:23:20.248 [2024-07-15 10:30:57.208028] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:20.248 [2024-07-15 10:30:57.208035] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:20.248 [2024-07-15 10:30:57.208050] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:20.248 [2024-07-15 10:30:57.208055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:23:20.248 [2024-07-15 10:30:57.208060] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x180100 00:23:20.248 [2024-07-15 10:30:57.208068] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:20.248 [2024-07-15 10:30:57.208075] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:20.248 [2024-07-15 10:30:57.208092] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:20.248 [2024-07-15 10:30:57.208097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:23:20.248 [2024-07-15 10:30:57.208102] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x180100 00:23:20.248 [2024-07-15 10:30:57.208110] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:20.248 [2024-07-15 10:30:57.208117] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:20.248 [2024-07-15 10:30:57.208130] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:20.248 [2024-07-15 10:30:57.208135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:23:20.248 [2024-07-15 10:30:57.208140] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x180100 00:23:20.248 [2024-07-15 10:30:57.208148] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:20.248 [2024-07-15 10:30:57.208155] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:20.248 [2024-07-15 10:30:57.208170] 
nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:20.248 [2024-07-15 10:30:57.208175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:23:20.248 [2024-07-15 10:30:57.208180] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x180100 00:23:20.248 [2024-07-15 10:30:57.208188] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:20.248 [2024-07-15 10:30:57.208195] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:20.248 [2024-07-15 10:30:57.208210] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:20.248 [2024-07-15 10:30:57.208215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:23:20.248 [2024-07-15 10:30:57.208220] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x180100 00:23:20.248 [2024-07-15 10:30:57.208228] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:23:20.248 [2024-07-15 10:30:57.212243] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:23:20.248 [2024-07-15 10:30:57.212261] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:23:20.248 [2024-07-15 10:30:57.212265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0001 p:0 m:0 dnr:0 00:23:20.248 [2024-07-15 10:30:57.212271] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x180100 00:23:20.248 [2024-07-15 10:30:57.212276] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:23:20.248 Used: 0% 00:23:20.248 Data Units Read: 0 00:23:20.248 Data Units Written: 0 00:23:20.248 Host Read Commands: 0 00:23:20.248 Host Write Commands: 0 00:23:20.248 Controller Busy Time: 0 minutes 00:23:20.249 Power Cycles: 0 00:23:20.249 Power On Hours: 0 hours 00:23:20.249 Unsafe Shutdowns: 0 00:23:20.249 Unrecoverable Media Errors: 0 00:23:20.249 Lifetime Error Log Entries: 0 00:23:20.249 Warning Temperature Time: 0 minutes 00:23:20.249 Critical Temperature Time: 0 minutes 00:23:20.249 00:23:20.249 Number of Queues 00:23:20.249 ================ 00:23:20.249 Number of I/O Submission Queues: 127 00:23:20.249 Number of I/O Completion Queues: 127 00:23:20.249 00:23:20.249 Active Namespaces 00:23:20.249 ================= 00:23:20.249 Namespace ID:1 00:23:20.249 Error Recovery Timeout: Unlimited 00:23:20.249 Command Set Identifier: NVM (00h) 00:23:20.249 Deallocate: Supported 00:23:20.249 Deallocated/Unwritten Error: Not Supported 00:23:20.249 Deallocated Read Value: Unknown 00:23:20.249 Deallocate in Write Zeroes: Not Supported 00:23:20.249 Deallocated Guard Field: 0xFFFF 00:23:20.249 Flush: Supported 00:23:20.249 Reservation: Supported 00:23:20.249 Namespace Sharing Capabilities: Multiple Controllers 00:23:20.249 Size (in LBAs): 131072 (0GiB) 00:23:20.249 Capacity (in LBAs): 131072 (0GiB) 00:23:20.249 Utilization (in LBAs): 131072 (0GiB) 00:23:20.249 NGUID: ABCDEF0123456789ABCDEF0123456789 00:23:20.249 EUI64: ABCDEF0123456789 00:23:20.249 UUID: 
d568c97f-cc0f-4fca-a0f8-a2aa2acc04a1 00:23:20.249 Thin Provisioning: Not Supported 00:23:20.249 Per-NS Atomic Units: Yes 00:23:20.249 Atomic Boundary Size (Normal): 0 00:23:20.249 Atomic Boundary Size (PFail): 0 00:23:20.249 Atomic Boundary Offset: 0 00:23:20.249 Maximum Single Source Range Length: 65535 00:23:20.249 Maximum Copy Length: 65535 00:23:20.249 Maximum Source Range Count: 1 00:23:20.249 NGUID/EUI64 Never Reused: No 00:23:20.249 Namespace Write Protected: No 00:23:20.249 Number of LBA Formats: 1 00:23:20.249 Current LBA Format: LBA Format #00 00:23:20.249 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:20.249 00:23:20.249 10:30:57 nvmf_rdma.nvmf_identify -- host/identify.sh@51 -- # sync 00:23:20.249 10:30:57 nvmf_rdma.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:20.249 10:30:57 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.249 10:30:57 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:20.249 10:30:57 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.249 10:30:57 nvmf_rdma.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:23:20.249 10:30:57 nvmf_rdma.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:23:20.249 10:30:57 nvmf_rdma.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:20.249 10:30:57 nvmf_rdma.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:23:20.249 10:30:57 nvmf_rdma.nvmf_identify -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:23:20.249 10:30:57 nvmf_rdma.nvmf_identify -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:23:20.249 10:30:57 nvmf_rdma.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:23:20.249 10:30:57 nvmf_rdma.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:20.249 10:30:57 nvmf_rdma.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:23:20.249 rmmod nvme_rdma 00:23:20.249 rmmod nvme_fabrics 00:23:20.249 10:30:57 nvmf_rdma.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:20.249 10:30:57 nvmf_rdma.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:23:20.249 10:30:57 nvmf_rdma.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:23:20.249 10:30:57 nvmf_rdma.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 3018630 ']' 00:23:20.249 10:30:57 nvmf_rdma.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 3018630 00:23:20.249 10:30:57 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 3018630 ']' 00:23:20.249 10:30:57 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 3018630 00:23:20.249 10:30:57 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:23:20.249 10:30:57 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:20.249 10:30:57 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3018630 00:23:20.249 10:30:57 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:20.249 10:30:57 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:20.249 10:30:57 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3018630' 00:23:20.249 killing process with pid 3018630 00:23:20.249 10:30:57 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@967 -- # kill 3018630 00:23:20.249 10:30:57 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@972 -- # wait 3018630 00:23:20.511 
10:30:57 nvmf_rdma.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:20.511 10:30:57 nvmf_rdma.nvmf_identify -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:23:20.511 00:23:20.511 real 0m10.050s 00:23:20.511 user 0m8.931s 00:23:20.511 sys 0m6.395s 00:23:20.511 10:30:57 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:20.511 10:30:57 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:20.511 ************************************ 00:23:20.511 END TEST nvmf_identify 00:23:20.511 ************************************ 00:23:20.511 10:30:57 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:23:20.511 10:30:57 nvmf_rdma -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:23:20.511 10:30:57 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:20.511 10:30:57 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:20.511 10:30:57 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:23:20.511 ************************************ 00:23:20.511 START TEST nvmf_perf 00:23:20.511 ************************************ 00:23:20.511 10:30:57 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:23:20.773 * Looking for test storage... 00:23:20.773 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:23:20.773 10:30:57 nvmf_rdma.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:20.773 10:30:57 nvmf_rdma.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:23:20.773 10:30:57 nvmf_rdma.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:20.773 10:30:57 nvmf_rdma.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:20.773 10:30:57 nvmf_rdma.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:20.773 10:30:57 nvmf_rdma.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:20.773 10:30:57 nvmf_rdma.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:20.773 10:30:57 nvmf_rdma.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:20.773 10:30:57 nvmf_rdma.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:20.773 10:30:57 nvmf_rdma.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:20.773 10:30:57 nvmf_rdma.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:20.773 10:30:57 nvmf_rdma.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:20.773 10:30:57 nvmf_rdma.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:20.773 10:30:57 nvmf_rdma.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:20.773 10:30:57 nvmf_rdma.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:20.773 10:30:57 nvmf_rdma.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:20.773 10:30:57 nvmf_rdma.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:20.773 10:30:57 nvmf_rdma.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:20.773 10:30:57 nvmf_rdma.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:20.773 10:30:57 nvmf_rdma.nvmf_perf -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:23:20.773 10:30:57 nvmf_rdma.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:20.773 10:30:57 nvmf_rdma.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:20.773 10:30:57 nvmf_rdma.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:20.773 10:30:57 nvmf_rdma.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:20.773 10:30:57 nvmf_rdma.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:20.773 10:30:57 nvmf_rdma.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:23:20.773 10:30:57 nvmf_rdma.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:20.773 10:30:57 nvmf_rdma.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:23:20.773 10:30:57 nvmf_rdma.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:20.773 10:30:57 nvmf_rdma.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:20.773 10:30:57 nvmf_rdma.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:20.773 10:30:57 nvmf_rdma.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:20.773 10:30:57 nvmf_rdma.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:20.773 10:30:57 nvmf_rdma.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:20.773 10:30:57 nvmf_rdma.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:20.773 10:30:57 nvmf_rdma.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:20.773 10:30:57 nvmf_rdma.nvmf_perf -- 
host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:20.773 10:30:57 nvmf_rdma.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:20.773 10:30:57 nvmf_rdma.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:23:20.773 10:30:57 nvmf_rdma.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:23:20.773 10:30:57 nvmf_rdma.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:23:20.773 10:30:57 nvmf_rdma.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:20.773 10:30:57 nvmf_rdma.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:20.773 10:30:57 nvmf_rdma.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:20.773 10:30:57 nvmf_rdma.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:20.773 10:30:57 nvmf_rdma.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:20.773 10:30:57 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:20.773 10:30:57 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:20.773 10:30:57 nvmf_rdma.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:20.773 10:30:57 nvmf_rdma.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:20.773 10:30:57 nvmf_rdma.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:23:20.773 10:30:57 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:28.919 
10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:23:28.919 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:23:28.919 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:23:28.919 Found net 
devices under 0000:98:00.0: mlx_0_0 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:23:28.919 Found net devices under 0000:98:00.1: mlx_0_1 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@420 -- # rdma_device_init 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@58 -- # uname 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@62 -- # modprobe ib_cm 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@63 -- # modprobe ib_core 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@64 -- # modprobe ib_umad 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@66 -- # modprobe iw_cm 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@502 -- # allocate_nic_ips 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@73 -- # get_rdma_if_list 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:23:28.919 10:31:05 
nvmf_rdma.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:23:28.919 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:28.919 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:23:28.919 altname enp152s0f0np0 00:23:28.919 altname ens817f0np0 00:23:28.919 inet 192.168.100.8/24 scope global mlx_0_0 00:23:28.919 valid_lft forever preferred_lft forever 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:23:28.919 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:28.919 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:23:28.919 altname enp152s0f1np1 00:23:28.919 altname ens817f1np1 00:23:28.919 inet 192.168.100.9/24 scope global mlx_0_1 00:23:28.919 valid_lft forever preferred_lft forever 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:23:28.919 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@86 -- # get_rdma_if_list 00:23:28.920 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev 
rxe_net_devs 00:23:28.920 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:28.920 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:28.920 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:28.920 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:28.920 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:28.920 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:28.920 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:28.920 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:28.920 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:23:28.920 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:28.920 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:28.920 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:28.920 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:28.920 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:28.920 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:28.920 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:23:28.920 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:28.920 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:23:28.920 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:28.920 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:28.920 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:28.920 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:28.920 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:28.920 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:23:28.920 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:28.920 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:28.920 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:28.920 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:28.920 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:23:28.920 192.168.100.9' 00:23:28.920 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:23:28.920 192.168.100.9' 00:23:28.920 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@457 -- # head -n 1 00:23:28.920 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:28.920 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:23:28.920 192.168.100.9' 00:23:28.920 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@458 -- # head -n 1 00:23:28.920 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@458 -- # tail -n +2 00:23:28.920 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:28.920 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@459 -- # '[' -z 
192.168.100.8 ']' 00:23:28.920 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:28.920 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:23:28.920 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:23:28.920 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:23:28.920 10:31:05 nvmf_rdma.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:23:28.920 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:28.920 10:31:05 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:28.920 10:31:05 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:28.920 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=3023188 00:23:28.920 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 3023188 00:23:28.920 10:31:05 nvmf_rdma.nvmf_perf -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:28.920 10:31:05 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 3023188 ']' 00:23:28.920 10:31:05 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:28.920 10:31:05 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:28.920 10:31:05 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:28.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:28.920 10:31:05 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:28.920 10:31:05 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:28.920 [2024-07-15 10:31:05.957654] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:23:28.920 [2024-07-15 10:31:05.957738] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:28.920 EAL: No free 2048 kB hugepages reported on node 1 00:23:28.920 [2024-07-15 10:31:06.029984] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:28.920 [2024-07-15 10:31:06.105816] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:28.920 [2024-07-15 10:31:06.105856] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:28.920 [2024-07-15 10:31:06.105864] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:28.920 [2024-07-15 10:31:06.105870] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:28.920 [2024-07-15 10:31:06.105876] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:28.920 [2024-07-15 10:31:06.106016] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:28.920 [2024-07-15 10:31:06.106035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:28.920 [2024-07-15 10:31:06.106107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:28.920 [2024-07-15 10:31:06.106108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:29.860 10:31:06 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:29.860 10:31:06 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:23:29.860 10:31:06 nvmf_rdma.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:29.860 10:31:06 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:29.860 10:31:06 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:29.860 10:31:06 nvmf_rdma.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:29.860 10:31:06 nvmf_rdma.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:23:29.860 10:31:06 nvmf_rdma.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:23:30.120 10:31:07 nvmf_rdma.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:23:30.120 10:31:07 nvmf_rdma.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:23:30.381 10:31:07 nvmf_rdma.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:23:30.381 10:31:07 nvmf_rdma.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:30.642 10:31:07 nvmf_rdma.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:23:30.642 10:31:07 nvmf_rdma.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:23:30.642 10:31:07 nvmf_rdma.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:23:30.642 10:31:07 nvmf_rdma.nvmf_perf -- host/perf.sh@37 -- # '[' rdma == rdma ']' 00:23:30.642 10:31:07 nvmf_rdma.nvmf_perf -- host/perf.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0 00:23:30.642 [2024-07-15 10:31:07.765583] rdma.c:2731:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:23:30.642 [2024-07-15 10:31:07.796473] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x153d220/0x166b300) succeed. 00:23:30.642 [2024-07-15 10:31:07.811288] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x153e860/0x154b180) succeed. 
00:23:30.902 10:31:07 nvmf_rdma.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:31.162 10:31:08 nvmf_rdma.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:31.162 10:31:08 nvmf_rdma.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:31.162 10:31:08 nvmf_rdma.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:31.162 10:31:08 nvmf_rdma.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:23:31.423 10:31:08 nvmf_rdma.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:23:31.423 [2024-07-15 10:31:08.594833] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:31.684 10:31:08 nvmf_rdma.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:23:31.684 10:31:08 nvmf_rdma.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:23:31.684 10:31:08 nvmf_rdma.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:23:31.684 10:31:08 nvmf_rdma.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:23:31.684 10:31:08 nvmf_rdma.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:23:33.067 Initializing NVMe Controllers 00:23:33.067 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:23:33.067 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:23:33.067 Initialization complete. Launching workers. 00:23:33.067 ======================================================== 00:23:33.067 Latency(us) 00:23:33.067 Device Information : IOPS MiB/s Average min max 00:23:33.067 PCIE (0000:65:00.0) NSID 1 from core 0: 79763.28 311.58 400.64 62.68 6203.63 00:23:33.067 ======================================================== 00:23:33.067 Total : 79763.28 311.58 400.64 62.68 6203.63 00:23:33.067 00:23:33.067 10:31:10 nvmf_rdma.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:23:33.067 EAL: No free 2048 kB hugepages reported on node 1 00:23:36.368 Initializing NVMe Controllers 00:23:36.368 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:23:36.368 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:36.368 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:36.368 Initialization complete. Launching workers. 
00:23:36.368 ======================================================== 00:23:36.368 Latency(us) 00:23:36.368 Device Information : IOPS MiB/s Average min max 00:23:36.368 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9696.97 37.88 102.86 37.39 4066.61 00:23:36.368 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 7231.98 28.25 138.00 52.24 4091.35 00:23:36.368 ======================================================== 00:23:36.368 Total : 16928.95 66.13 117.87 37.39 4091.35 00:23:36.368 00:23:36.368 10:31:13 nvmf_rdma.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:23:36.368 EAL: No free 2048 kB hugepages reported on node 1 00:23:40.570 Initializing NVMe Controllers 00:23:40.570 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:23:40.570 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:40.570 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:40.570 Initialization complete. Launching workers. 00:23:40.570 ======================================================== 00:23:40.570 Latency(us) 00:23:40.570 Device Information : IOPS MiB/s Average min max 00:23:40.570 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 20402.98 79.70 1568.24 416.13 5395.30 00:23:40.570 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4032.00 15.75 7971.43 5902.67 8889.77 00:23:40.570 ======================================================== 00:23:40.570 Total : 24434.98 95.45 2624.82 416.13 8889.77 00:23:40.570 00:23:40.570 10:31:16 nvmf_rdma.nvmf_perf -- host/perf.sh@59 -- # [[ mlx5 == \e\8\1\0 ]] 00:23:40.570 10:31:16 nvmf_rdma.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:23:40.570 EAL: No free 2048 kB hugepages reported on node 1 00:23:44.775 Initializing NVMe Controllers 00:23:44.775 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:23:44.775 Controller IO queue size 128, less than required. 00:23:44.775 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:44.775 Controller IO queue size 128, less than required. 00:23:44.775 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:44.775 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:44.775 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:44.775 Initialization complete. Launching workers. 
00:23:44.775 ======================================================== 00:23:44.775 Latency(us) 00:23:44.775 Device Information : IOPS MiB/s Average min max 00:23:44.775 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5092.47 1273.12 25199.08 10004.48 63316.78 00:23:44.775 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 5152.47 1288.12 24614.84 11585.94 46544.24 00:23:44.775 ======================================================== 00:23:44.775 Total : 10244.95 2561.24 24905.25 10004.48 63316.78 00:23:44.775 00:23:44.775 10:31:21 nvmf_rdma.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0xf -P 4 00:23:44.775 EAL: No free 2048 kB hugepages reported on node 1 00:23:44.775 No valid NVMe controllers or AIO or URING devices found 00:23:44.775 Initializing NVMe Controllers 00:23:44.775 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:23:44.775 Controller IO queue size 128, less than required. 00:23:44.775 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:44.775 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:23:44.775 Controller IO queue size 128, less than required. 00:23:44.775 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:44.775 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:23:44.775 WARNING: Some requested NVMe devices were skipped 00:23:44.775 10:31:21 nvmf_rdma.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' --transport-stat 00:23:44.775 EAL: No free 2048 kB hugepages reported on node 1 00:23:48.980 Initializing NVMe Controllers 00:23:48.980 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:23:48.980 Controller IO queue size 128, less than required. 00:23:48.980 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:48.980 Controller IO queue size 128, less than required. 00:23:48.980 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:48.980 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:48.980 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:48.980 Initialization complete. Launching workers. 
00:23:48.980 00:23:48.980 ==================== 00:23:48.980 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:23:48.980 RDMA transport: 00:23:48.980 dev name: mlx5_0 00:23:48.980 polls: 270329 00:23:48.980 idle_polls: 266081 00:23:48.980 completions: 54030 00:23:48.980 queued_requests: 1 00:23:48.980 total_send_wrs: 27015 00:23:48.980 send_doorbell_updates: 3805 00:23:48.980 total_recv_wrs: 27142 00:23:48.980 recv_doorbell_updates: 3806 00:23:48.980 --------------------------------- 00:23:48.980 00:23:48.980 ==================== 00:23:48.980 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:23:48.980 RDMA transport: 00:23:48.980 dev name: mlx5_0 00:23:48.980 polls: 271242 00:23:48.980 idle_polls: 270975 00:23:48.980 completions: 17950 00:23:48.980 queued_requests: 1 00:23:48.980 total_send_wrs: 8975 00:23:48.980 send_doorbell_updates: 254 00:23:48.980 total_recv_wrs: 9102 00:23:48.980 recv_doorbell_updates: 255 00:23:48.980 --------------------------------- 00:23:48.980 ======================================================== 00:23:48.980 Latency(us) 00:23:48.980 Device Information : IOPS MiB/s Average min max 00:23:48.980 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6749.87 1687.47 18967.83 8455.54 46950.23 00:23:48.980 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2242.29 560.57 57266.73 29497.25 84260.89 00:23:48.980 ======================================================== 00:23:48.980 Total : 8992.16 2248.04 28518.08 8455.54 84260.89 00:23:48.980 00:23:49.241 10:31:26 nvmf_rdma.nvmf_perf -- host/perf.sh@66 -- # sync 00:23:49.241 10:31:26 nvmf_rdma.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:49.241 10:31:26 nvmf_rdma.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:23:49.241 10:31:26 nvmf_rdma.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:23:49.241 10:31:26 nvmf_rdma.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:23:49.241 10:31:26 nvmf_rdma.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:49.241 10:31:26 nvmf_rdma.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:23:49.241 10:31:26 nvmf_rdma.nvmf_perf -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:23:49.241 10:31:26 nvmf_rdma.nvmf_perf -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:23:49.241 10:31:26 nvmf_rdma.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:23:49.241 10:31:26 nvmf_rdma.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:49.241 10:31:26 nvmf_rdma.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:23:49.241 rmmod nvme_rdma 00:23:49.241 rmmod nvme_fabrics 00:23:49.241 10:31:26 nvmf_rdma.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:49.241 10:31:26 nvmf_rdma.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:23:49.241 10:31:26 nvmf_rdma.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:23:49.241 10:31:26 nvmf_rdma.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 3023188 ']' 00:23:49.241 10:31:26 nvmf_rdma.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 3023188 00:23:49.241 10:31:26 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 3023188 ']' 00:23:49.241 10:31:26 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 3023188 00:23:49.241 10:31:26 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:23:49.241 10:31:26 nvmf_rdma.nvmf_perf -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:49.241 10:31:26 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3023188 00:23:49.503 10:31:26 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:49.503 10:31:26 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:49.503 10:31:26 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3023188' 00:23:49.503 killing process with pid 3023188 00:23:49.503 10:31:26 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@967 -- # kill 3023188 00:23:49.503 10:31:26 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@972 -- # wait 3023188 00:23:51.413 10:31:28 nvmf_rdma.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:51.413 10:31:28 nvmf_rdma.nvmf_perf -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:23:51.413 00:23:51.413 real 0m30.825s 00:23:51.413 user 1m32.882s 00:23:51.413 sys 0m7.043s 00:23:51.413 10:31:28 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:51.413 10:31:28 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:51.413 ************************************ 00:23:51.413 END TEST nvmf_perf 00:23:51.413 ************************************ 00:23:51.413 10:31:28 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:23:51.413 10:31:28 nvmf_rdma -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:23:51.413 10:31:28 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:51.413 10:31:28 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:51.413 10:31:28 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:23:51.413 ************************************ 00:23:51.413 START TEST nvmf_fio_host 00:23:51.413 ************************************ 00:23:51.413 10:31:28 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:23:51.674 * Looking for test storage... 
00:23:51.674 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:23:51.674 10:31:28 nvmf_rdma.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:51.674 10:31:28 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:51.674 10:31:28 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:51.674 10:31:28 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:51.674 10:31:28 nvmf_rdma.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.674 10:31:28 nvmf_rdma.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.674 10:31:28 nvmf_rdma.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.674 10:31:28 nvmf_rdma.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:51.674 10:31:28 nvmf_rdma.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.674 10:31:28 nvmf_rdma.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:51.674 10:31:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:23:51.674 10:31:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:51.674 10:31:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:51.674 10:31:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:23:51.674 10:31:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:51.674 10:31:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:51.674 10:31:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:51.674 10:31:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:51.674 10:31:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:51.674 10:31:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:51.674 10:31:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:51.674 10:31:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:51.674 10:31:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:51.674 10:31:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:51.674 10:31:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:51.674 10:31:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:51.674 10:31:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:51.675 10:31:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:51.675 10:31:28 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:51.675 10:31:28 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:51.675 10:31:28 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:51.675 10:31:28 nvmf_rdma.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.675 10:31:28 nvmf_rdma.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.675 10:31:28 nvmf_rdma.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.675 10:31:28 nvmf_rdma.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:51.675 10:31:28 nvmf_rdma.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.675 10:31:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:23:51.675 10:31:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:51.675 10:31:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:51.675 10:31:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:51.675 10:31:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:51.675 10:31:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:51.675 10:31:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:51.675 10:31:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:51.675 10:31:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:51.675 10:31:28 nvmf_rdma.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:23:51.675 10:31:28 nvmf_rdma.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:23:51.675 10:31:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:23:51.675 10:31:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:51.675 10:31:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:51.675 10:31:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:51.675 10:31:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:51.675 10:31:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:51.675 10:31:28 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:51.675 10:31:28 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:51.675 10:31:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:51.675 10:31:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:51.675 10:31:28 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:23:51.675 10:31:28 nvmf_rdma.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:59.894 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:59.894 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:23:59.894 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:59.894 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:59.894 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:59.894 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:59.894 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:59.894 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:23:59.894 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:59.894 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:23:59.894 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:23:59.894 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:23:59.894 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:23:59.894 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:23:59.894 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:23:59.894 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:59.894 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:59.894 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:59.894 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:59.894 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:59.894 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:59.894 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:59.894 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:59.894 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:59.894 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:59.894 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:59.894 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:59.894 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:23:59.894 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:23:59.894 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:23:59.894 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:23:59.894 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:23:59.894 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:59.894 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:59.894 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 
00:23:59.894 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:23:59.894 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:59.894 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:59.894 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:23:59.895 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:23:59.895 Found net devices under 0000:98:00.0: mlx_0_0 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:23:59.895 Found net devices under 0000:98:00.1: mlx_0_1 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ 
yes == yes ]] 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@420 -- # rdma_device_init 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@58 -- # uname 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@62 -- # modprobe ib_cm 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@63 -- # modprobe ib_core 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@64 -- # modprobe ib_umad 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@66 -- # modprobe iw_cm 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@502 -- # allocate_nic_ips 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@73 -- # get_rdma_if_list 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- 
nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:23:59.895 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:59.895 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:23:59.895 altname enp152s0f0np0 00:23:59.895 altname ens817f0np0 00:23:59.895 inet 192.168.100.8/24 scope global mlx_0_0 00:23:59.895 valid_lft forever preferred_lft forever 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:23:59.895 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:23:59.895 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:59.895 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:23:59.895 altname enp152s0f1np1 00:23:59.896 altname ens817f1np1 00:23:59.896 inet 192.168.100.9/24 scope global mlx_0_1 00:23:59.896 valid_lft forever preferred_lft forever 00:23:59.896 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:23:59.896 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:59.896 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:59.896 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:23:59.896 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:23:59.896 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@86 -- # get_rdma_if_list 00:23:59.896 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:59.896 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:59.896 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:59.896 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:59.896 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:59.896 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:59.896 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:59.896 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:59.896 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:59.896 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@105 -- 
# continue 2 00:23:59.896 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:59.896 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:59.896 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:59.896 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:59.896 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:59.896 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:59.896 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:23:59.896 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:59.896 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:23:59.896 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:59.896 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:59.896 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:59.896 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:59.896 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:59.896 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:23:59.896 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:59.896 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:59.896 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:59.896 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:59.896 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:23:59.896 192.168.100.9' 00:23:59.896 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:23:59.896 192.168.100.9' 00:23:59.896 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@457 -- # head -n 1 00:23:59.896 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:59.896 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:23:59.896 192.168.100.9' 00:23:59.896 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@458 -- # tail -n +2 00:23:59.896 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@458 -- # head -n 1 00:23:59.896 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:59.896 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:23:59.896 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:59.896 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:23:59.896 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:23:59.896 10:31:36 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:23:59.896 10:31:36 nvmf_rdma.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:23:59.896 10:31:36 nvmf_rdma.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:23:59.896 10:31:36 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:59.896 10:31:36 nvmf_rdma.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:59.896 10:31:36 nvmf_rdma.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3031920 00:23:59.896 10:31:36 nvmf_rdma.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:59.896 10:31:36 nvmf_rdma.nvmf_fio_host -- host/fio.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:59.896 10:31:36 nvmf_rdma.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3031920 00:23:59.896 10:31:36 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 3031920 ']' 00:23:59.896 10:31:36 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:59.896 10:31:36 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:59.896 10:31:36 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:59.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:59.896 10:31:36 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:59.896 10:31:36 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.896 [2024-07-15 10:31:36.822097] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:23:59.896 [2024-07-15 10:31:36.822168] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:59.896 EAL: No free 2048 kB hugepages reported on node 1 00:23:59.896 [2024-07-15 10:31:36.893580] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:59.896 [2024-07-15 10:31:36.968062] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:59.896 [2024-07-15 10:31:36.968101] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:59.896 [2024-07-15 10:31:36.968109] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:59.896 [2024-07-15 10:31:36.968116] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:59.896 [2024-07-15 10:31:36.968121] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:59.896 [2024-07-15 10:31:36.968270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:59.896 [2024-07-15 10:31:36.968340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:59.896 [2024-07-15 10:31:36.968504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:59.896 [2024-07-15 10:31:36.968506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:00.470 10:31:37 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:00.470 10:31:37 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:24:00.470 10:31:37 nvmf_rdma.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:24:00.731 [2024-07-15 10:31:37.782113] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1dec200/0x1df06f0) succeed. 
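For reference, the target bring-up captured in the trace above reduces to starting the NVMe-oF target app and creating the RDMA transport. A minimal sketch, assuming an SPDK checkout as the working directory and the same core mask, tracepoint mask, and buffer settings the test passed (shown in the log lines above):

  # start the NVMe-oF target (shm id 0, all tracepoint groups, 4-core mask), then wait for its RPC socket
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  # create the RDMA transport with the shared-buffer count and IO unit size used by the test
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

The create_ib_device notices that follow are the target registering each mlx5 port with that transport.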
00:24:00.731 [2024-07-15 10:31:37.797007] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1ded840/0x1e31d80) succeed. 00:24:00.992 10:31:37 nvmf_rdma.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:00.992 10:31:37 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:00.992 10:31:37 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.992 10:31:37 nvmf_rdma.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:00.992 Malloc1 00:24:00.992 10:31:38 nvmf_rdma.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:01.251 10:31:38 nvmf_rdma.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:01.512 10:31:38 nvmf_rdma.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:01.512 [2024-07-15 10:31:38.635041] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:01.512 10:31:38 nvmf_rdma.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:24:01.772 10:31:38 nvmf_rdma.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:24:01.772 10:31:38 nvmf_rdma.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:24:01.772 10:31:38 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:24:01.772 10:31:38 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:01.772 10:31:38 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:01.772 10:31:38 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:01.772 10:31:38 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:24:01.772 10:31:38 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:24:01.772 10:31:38 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:01.772 10:31:38 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:01.772 10:31:38 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:24:01.772 10:31:38 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:24:01.772 10:31:38 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:01.772 10:31:38 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:01.772 
10:31:38 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:01.772 10:31:38 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:01.772 10:31:38 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:24:01.772 10:31:38 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:01.772 10:31:38 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:01.772 10:31:38 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:01.772 10:31:38 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:01.772 10:31:38 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:01.772 10:31:38 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:24:02.032 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:02.032 fio-3.35 00:24:02.032 Starting 1 thread 00:24:02.032 EAL: No free 2048 kB hugepages reported on node 1 00:24:04.575 00:24:04.575 test: (groupid=0, jobs=1): err= 0: pid=3032551: Mon Jul 15 10:31:41 2024 00:24:04.575 read: IOPS=14.3k, BW=55.9MiB/s (58.6MB/s)(112MiB/2005msec) 00:24:04.575 slat (nsec): min=2045, max=36259, avg=2125.51, stdev=579.71 00:24:04.575 clat (usec): min=2417, max=8207, avg=4440.94, stdev=156.87 00:24:04.575 lat (usec): min=2444, max=8209, avg=4443.07, stdev=156.79 00:24:04.575 clat percentiles (usec): 00:24:04.575 | 1.00th=[ 3949], 5.00th=[ 4359], 10.00th=[ 4424], 20.00th=[ 4424], 00:24:04.575 | 30.00th=[ 4424], 40.00th=[ 4424], 50.00th=[ 4424], 60.00th=[ 4424], 00:24:04.575 | 70.00th=[ 4424], 80.00th=[ 4490], 90.00th=[ 4490], 95.00th=[ 4490], 00:24:04.575 | 99.00th=[ 4883], 99.50th=[ 4883], 99.90th=[ 6456], 99.95th=[ 7504], 00:24:04.575 | 99.99th=[ 8160] 00:24:04.575 bw ( KiB/s): min=56288, max=57792, per=100.00%, avg=57264.00, stdev=706.54, samples=4 00:24:04.575 iops : min=14072, max=14448, avg=14316.00, stdev=176.64, samples=4 00:24:04.575 write: IOPS=14.3k, BW=56.0MiB/s (58.7MB/s)(112MiB/2005msec); 0 zone resets 00:24:04.575 slat (nsec): min=2115, max=22263, avg=2244.03, stdev=575.79 00:24:04.575 clat (usec): min=2458, max=8199, avg=4439.57, stdev=151.25 00:24:04.575 lat (usec): min=2470, max=8201, avg=4441.82, stdev=151.17 00:24:04.575 clat percentiles (usec): 00:24:04.575 | 1.00th=[ 3949], 5.00th=[ 4359], 10.00th=[ 4424], 20.00th=[ 4424], 00:24:04.575 | 30.00th=[ 4424], 40.00th=[ 4424], 50.00th=[ 4424], 60.00th=[ 4424], 00:24:04.575 | 70.00th=[ 4424], 80.00th=[ 4490], 90.00th=[ 4490], 95.00th=[ 4490], 00:24:04.575 | 99.00th=[ 4883], 99.50th=[ 4883], 99.90th=[ 6521], 99.95th=[ 7504], 00:24:04.575 | 99.99th=[ 8160] 00:24:04.575 bw ( KiB/s): min=56576, max=58032, per=100.00%, avg=57384.00, stdev=604.41, samples=4 00:24:04.575 iops : min=14144, max=14508, avg=14346.00, stdev=151.10, samples=4 00:24:04.575 lat (msec) : 4=2.15%, 10=97.85% 00:24:04.575 cpu : usr=99.60%, sys=0.00%, ctx=16, majf=0, minf=5 00:24:04.575 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:04.575 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:24:04.575 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:04.575 issued rwts: total=28703,28748,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:04.575 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:04.575 00:24:04.575 Run status group 0 (all jobs): 00:24:04.575 READ: bw=55.9MiB/s (58.6MB/s), 55.9MiB/s-55.9MiB/s (58.6MB/s-58.6MB/s), io=112MiB (118MB), run=2005-2005msec 00:24:04.575 WRITE: bw=56.0MiB/s (58.7MB/s), 56.0MiB/s-56.0MiB/s (58.7MB/s-58.7MB/s), io=112MiB (118MB), run=2005-2005msec 00:24:04.575 10:31:41 nvmf_rdma.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:24:04.575 10:31:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:24:04.575 10:31:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:04.575 10:31:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:04.575 10:31:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:04.575 10:31:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:24:04.575 10:31:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:24:04.575 10:31:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:04.575 10:31:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:04.575 10:31:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:24:04.576 10:31:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:24:04.576 10:31:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:04.576 10:31:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:04.576 10:31:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:04.576 10:31:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:04.576 10:31:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:24:04.576 10:31:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:04.576 10:31:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:04.576 10:31:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:04.576 10:31:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:04.576 10:31:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:04.576 10:31:41 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 
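The fio pass above boils down to exporting a RAM-backed namespace over NVMe-oF/RDMA and pointing fio's SPDK external ioengine at it. A condensed sketch of the same sequence, assuming an SPDK checkout as the working directory and a locally built fio (the fio binary path will differ per system):

  # export a 64 MB malloc bdev (512-byte blocks) over RDMA on 192.168.100.8:4420
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  # run fio through the SPDK NVMe plugin; the filename string encodes transport, address, port, and namespace
  LD_PRELOAD=./build/fio/spdk_nvme /usr/src/fio/fio ./app/fio/nvme/example_config.fio \
    '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096

The second fio invocation above differs only in using mock_sgl_config.fio (16 KiB blocks) against the same subsystem.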
00:24:04.836 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:04.836 fio-3.35 00:24:04.836 Starting 1 thread 00:24:04.836 EAL: No free 2048 kB hugepages reported on node 1 00:24:07.378 00:24:07.378 test: (groupid=0, jobs=1): err= 0: pid=3033278: Mon Jul 15 10:31:44 2024 00:24:07.378 read: IOPS=13.9k, BW=218MiB/s (228MB/s)(429MiB/1972msec) 00:24:07.378 slat (nsec): min=3389, max=56141, avg=3634.40, stdev=1193.78 00:24:07.378 clat (usec): min=334, max=10766, avg=3397.15, stdev=1906.38 00:24:07.378 lat (usec): min=338, max=10789, avg=3400.78, stdev=1906.57 00:24:07.378 clat percentiles (usec): 00:24:07.378 | 1.00th=[ 914], 5.00th=[ 1090], 10.00th=[ 1221], 20.00th=[ 1549], 00:24:07.378 | 30.00th=[ 1893], 40.00th=[ 2343], 50.00th=[ 3032], 60.00th=[ 3720], 00:24:07.378 | 70.00th=[ 4424], 80.00th=[ 5276], 90.00th=[ 6259], 95.00th=[ 6718], 00:24:07.378 | 99.00th=[ 8094], 99.50th=[ 8356], 99.90th=[ 8717], 99.95th=[ 9503], 00:24:07.378 | 99.99th=[10683] 00:24:07.378 bw ( KiB/s): min=99264, max=115136, per=49.15%, avg=109472.00, stdev=7289.03, samples=4 00:24:07.378 iops : min= 6204, max= 7196, avg=6842.00, stdev=455.56, samples=4 00:24:07.378 write: IOPS=7859, BW=123MiB/s (129MB/s)(223MiB/1818msec); 0 zone resets 00:24:07.378 slat (usec): min=39, max=163, avg=40.92, stdev= 6.66 00:24:07.378 clat (usec): min=368, max=23925, avg=9677.53, stdev=5199.40 00:24:07.378 lat (usec): min=408, max=23965, avg=9718.44, stdev=5199.62 00:24:07.378 clat percentiles (usec): 00:24:07.378 | 1.00th=[ 2147], 5.00th=[ 2900], 10.00th=[ 3490], 20.00th=[ 4424], 00:24:07.378 | 30.00th=[ 5604], 40.00th=[ 6783], 50.00th=[ 8225], 60.00th=[11600], 00:24:07.378 | 70.00th=[14091], 80.00th=[15270], 90.00th=[16712], 95.00th=[17957], 00:24:07.378 | 99.00th=[19792], 99.50th=[20579], 99.90th=[21365], 99.95th=[22938], 00:24:07.378 | 99.99th=[23725] 00:24:07.378 bw ( KiB/s): min=105792, max=121536, per=90.58%, avg=113904.00, stdev=6963.59, samples=4 00:24:07.378 iops : min= 6612, max= 7596, avg=7119.00, stdev=435.22, samples=4 00:24:07.378 lat (usec) : 500=0.04%, 750=0.12%, 1000=1.60% 00:24:07.378 lat (msec) : 2=20.08%, 4=25.42%, 10=37.94%, 20=14.50%, 50=0.30% 00:24:07.378 cpu : usr=96.85%, sys=0.90%, ctx=183, majf=0, minf=16 00:24:07.378 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:24:07.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:07.378 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:07.378 issued rwts: total=27451,14288,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:07.378 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:07.378 00:24:07.378 Run status group 0 (all jobs): 00:24:07.378 READ: bw=218MiB/s (228MB/s), 218MiB/s-218MiB/s (228MB/s-228MB/s), io=429MiB (450MB), run=1972-1972msec 00:24:07.378 WRITE: bw=123MiB/s (129MB/s), 123MiB/s-123MiB/s (129MB/s-129MB/s), io=223MiB (234MB), run=1818-1818msec 00:24:07.378 10:31:44 nvmf_rdma.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:07.378 10:31:44 nvmf_rdma.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:24:07.378 10:31:44 nvmf_rdma.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:07.378 10:31:44 nvmf_rdma.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:07.378 10:31:44 nvmf_rdma.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:24:07.378 
10:31:44 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:07.378 10:31:44 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:24:07.378 10:31:44 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:24:07.378 10:31:44 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:24:07.378 10:31:44 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:24:07.378 10:31:44 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:07.378 10:31:44 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:24:07.378 rmmod nvme_rdma 00:24:07.378 rmmod nvme_fabrics 00:24:07.639 10:31:44 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:07.639 10:31:44 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:24:07.639 10:31:44 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:24:07.639 10:31:44 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 3031920 ']' 00:24:07.639 10:31:44 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 3031920 00:24:07.639 10:31:44 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 3031920 ']' 00:24:07.639 10:31:44 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 3031920 00:24:07.639 10:31:44 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:24:07.639 10:31:44 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:07.639 10:31:44 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3031920 00:24:07.639 10:31:44 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:07.639 10:31:44 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:07.639 10:31:44 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3031920' 00:24:07.639 killing process with pid 3031920 00:24:07.639 10:31:44 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 3031920 00:24:07.639 10:31:44 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 3031920 00:24:07.901 10:31:44 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:07.901 10:31:44 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:24:07.901 00:24:07.901 real 0m16.283s 00:24:07.901 user 1m10.483s 00:24:07.901 sys 0m6.817s 00:24:07.901 10:31:44 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:07.901 10:31:44 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.901 ************************************ 00:24:07.901 END TEST nvmf_fio_host 00:24:07.901 ************************************ 00:24:07.901 10:31:44 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:24:07.901 10:31:44 nvmf_rdma -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:24:07.901 10:31:44 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:07.901 10:31:44 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:07.901 10:31:44 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:07.901 ************************************ 00:24:07.901 START TEST nvmf_failover 00:24:07.901 ************************************ 00:24:07.901 10:31:44 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:24:07.901 * Looking for test storage... 00:24:07.901 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:07.901 10:31:45 nvmf_rdma.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:07.901 10:31:45 nvmf_rdma.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:07.901 10:31:45 nvmf_rdma.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:07.901 10:31:45 nvmf_rdma.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:07.901 10:31:45 nvmf_rdma.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:07.901 10:31:45 nvmf_rdma.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:07.901 10:31:45 nvmf_rdma.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:07.901 10:31:45 nvmf_rdma.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:07.901 10:31:45 nvmf_rdma.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:07.901 10:31:45 nvmf_rdma.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:07.901 10:31:45 nvmf_rdma.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:07.901 10:31:45 nvmf_rdma.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:07.901 10:31:45 nvmf_rdma.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:07.901 10:31:45 nvmf_rdma.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:07.901 10:31:45 nvmf_rdma.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:07.901 10:31:45 nvmf_rdma.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:07.901 10:31:45 nvmf_rdma.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:07.901 10:31:45 nvmf_rdma.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:07.901 10:31:45 nvmf_rdma.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:07.901 10:31:45 nvmf_rdma.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:07.901 10:31:45 nvmf_rdma.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:07.901 10:31:45 nvmf_rdma.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:07.901 10:31:45 nvmf_rdma.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.901 10:31:45 nvmf_rdma.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.901 10:31:45 nvmf_rdma.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.901 10:31:45 nvmf_rdma.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:07.901 10:31:45 nvmf_rdma.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.901 10:31:45 nvmf_rdma.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:24:07.901 10:31:45 nvmf_rdma.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:07.901 10:31:45 nvmf_rdma.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:07.901 10:31:45 nvmf_rdma.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:07.901 10:31:45 nvmf_rdma.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:07.901 10:31:45 nvmf_rdma.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:07.901 10:31:45 nvmf_rdma.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:07.901 10:31:45 nvmf_rdma.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:07.901 10:31:45 nvmf_rdma.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:07.901 10:31:45 nvmf_rdma.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:07.901 10:31:45 nvmf_rdma.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:07.901 10:31:45 nvmf_rdma.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:24:07.901 10:31:45 nvmf_rdma.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:07.901 10:31:45 nvmf_rdma.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:07.901 10:31:45 nvmf_rdma.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:24:07.901 10:31:45 nvmf_rdma.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:07.901 10:31:45 nvmf_rdma.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:07.901 10:31:45 nvmf_rdma.nvmf_failover -- nvmf/common.sh@410 
-- # local -g is_hw=no 00:24:07.901 10:31:45 nvmf_rdma.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:07.901 10:31:45 nvmf_rdma.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:07.901 10:31:45 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:07.901 10:31:45 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:07.901 10:31:45 nvmf_rdma.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:07.901 10:31:45 nvmf_rdma.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:07.901 10:31:45 nvmf_rdma.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:24:07.901 10:31:45 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:16.043 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:16.043 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:24:16.043 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:16.043 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:16.043 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:16.043 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:16.043 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:16.043 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:24:16.043 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:16.043 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:24:16.043 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:24:16.043 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:24:16.043 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:24:16.043 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:24:16.043 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:24:16.043 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:16.043 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:16.043 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:16.043 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:16.043 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:16.043 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:16.043 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:16.043 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:16.043 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:16.043 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:16.043 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:16.043 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:16.043 
10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:24:16.043 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:24:16.043 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:24:16.043 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:24:16.043 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:24:16.043 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:16.043 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:16.043 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:24:16.043 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:24:16.043 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:24:16.043 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:24:16.043 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:16.043 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:16.043 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:24:16.043 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:24:16.043 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:16.043 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:24:16.043 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:24:16.043 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:24:16.043 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:24:16.043 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:16.043 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:16.043 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:24:16.043 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:24:16.043 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:16.043 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:24:16.043 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:16.043 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:16.043 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:24:16.043 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:16.043 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:16.043 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:24:16.043 Found net devices under 0000:98:00.0: mlx_0_0 00:24:16.043 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:16.043 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:16.043 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:16.043 10:31:52 nvmf_rdma.nvmf_failover -- 
nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:24:16.043 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:16.043 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:16.044 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:24:16.044 Found net devices under 0000:98:00.1: mlx_0_1 00:24:16.044 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:16.044 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:16.044 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:24:16.044 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:16.044 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:24:16.044 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:24:16.044 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@420 -- # rdma_device_init 00:24:16.044 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:24:16.044 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@58 -- # uname 00:24:16.044 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:24:16.044 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@62 -- # modprobe ib_cm 00:24:16.044 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@63 -- # modprobe ib_core 00:24:16.044 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@64 -- # modprobe ib_umad 00:24:16.044 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:24:16.044 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@66 -- # modprobe iw_cm 00:24:16.044 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:24:16.044 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:24:16.044 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@502 -- # allocate_nic_ips 00:24:16.044 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:16.044 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@73 -- # get_rdma_if_list 00:24:16.044 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:16.044 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:24:16.044 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:24:16.044 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:16.044 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:24:16.044 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:16.044 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:16.044 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:16.044 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_0 00:24:16.044 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:24:16.044 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:16.044 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:16.044 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # 
[[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:16.044 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:16.044 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:16.044 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_1 00:24:16.044 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:24:16.044 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:24:16.044 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:24:16.044 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:24:16.044 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:24:16.044 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:16.044 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:16.044 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:24:16.044 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:24:16.044 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:24:16.044 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:16.044 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:24:16.044 altname enp152s0f0np0 00:24:16.044 altname ens817f0np0 00:24:16.044 inet 192.168.100.8/24 scope global mlx_0_0 00:24:16.044 valid_lft forever preferred_lft forever 00:24:16.044 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:24:16.044 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:24:16.044 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:24:16.044 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:24:16.044 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:16.044 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:16.044 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:24:16.044 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:24:16.044 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:24:16.044 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:16.044 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:24:16.044 altname enp152s0f1np1 00:24:16.044 altname ens817f1np1 00:24:16.044 inet 192.168.100.9/24 scope global mlx_0_1 00:24:16.044 valid_lft forever preferred_lft forever 00:24:16.044 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:24:16.044 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:16.044 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:16.044 10:31:52 nvmf_rdma.nvmf_failover -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:24:16.044 10:31:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:24:16.044 10:31:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@86 -- # get_rdma_if_list 00:24:16.044 10:31:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:16.044 10:31:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:24:16.044 10:31:53 
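The address probe in the trace above is an ip/awk/cut pipeline run once per RDMA interface; condensed into a standalone helper it is just the following (same commands as the get_ip_address step shown, interface names and addresses are the ones from this run):

get_ip_address() {
  local interface=$1
  # IPv4 address on the interface, with the /prefix suffix stripped
  ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address mlx_0_0   # -> 192.168.100.8 on this host
get_ip_address mlx_0_1   # -> 192.168.100.9 on this host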
nvmf_rdma.nvmf_failover -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:24:16.044 10:31:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:16.044 10:31:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:24:16.044 10:31:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:16.044 10:31:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:16.044 10:31:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:16.044 10:31:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_0 00:24:16.044 10:31:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:24:16.044 10:31:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:16.044 10:31:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:16.044 10:31:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:16.044 10:31:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:16.044 10:31:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:16.044 10:31:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_1 00:24:16.044 10:31:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:24:16.044 10:31:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:24:16.044 10:31:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:24:16.044 10:31:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:24:16.044 10:31:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:24:16.044 10:31:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:16.044 10:31:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:16.044 10:31:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:24:16.044 10:31:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:24:16.044 10:31:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:24:16.044 10:31:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:24:16.044 10:31:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:16.044 10:31:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:16.044 10:31:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:24:16.044 192.168.100.9' 00:24:16.044 10:31:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:24:16.044 192.168.100.9' 00:24:16.044 10:31:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@457 -- # head -n 1 00:24:16.044 10:31:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:16.044 10:31:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:24:16.044 192.168.100.9' 00:24:16.044 10:31:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@458 -- # tail -n +2 00:24:16.044 10:31:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@458 -- # head -n 1 00:24:16.044 10:31:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:16.044 10:31:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@459 -- # 
'[' -z 192.168.100.8 ']' 00:24:16.044 10:31:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:16.044 10:31:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:24:16.044 10:31:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:24:16.044 10:31:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:24:16.044 10:31:53 nvmf_rdma.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:16.044 10:31:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:16.044 10:31:53 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:16.044 10:31:53 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:16.044 10:31:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=3037951 00:24:16.044 10:31:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 3037951 00:24:16.044 10:31:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:16.044 10:31:53 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 3037951 ']' 00:24:16.044 10:31:53 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:16.044 10:31:53 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:16.044 10:31:53 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:16.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:16.044 10:31:53 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:16.044 10:31:53 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:16.044 [2024-07-15 10:31:53.132091] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:16.044 [2024-07-15 10:31:53.132134] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:16.044 EAL: No free 2048 kB hugepages reported on node 1 00:24:16.044 [2024-07-15 10:31:53.209135] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:16.305 [2024-07-15 10:31:53.280571] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:16.305 [2024-07-15 10:31:53.280609] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:16.305 [2024-07-15 10:31:53.280617] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:16.305 [2024-07-15 10:31:53.280624] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:16.305 [2024-07-15 10:31:53.280630] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
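RDMA_IP_LIST holds one discovered address per line, and the first and second target IPs are peeled off with head/tail exactly as the trace shows before nvmf_tgt is started. Reproduced standalone with the addresses found on mlx_0_0/mlx_0_1 in this run:

# Standalone reproduction of the IP split performed above.
RDMA_IP_LIST=$(printf '192.168.100.8\n192.168.100.9\n')
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
echo "$NVMF_FIRST_TARGET_IP / $NVMF_SECOND_TARGET_IP"   # 192.168.100.8 / 192.168.100.9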
00:24:16.305 [2024-07-15 10:31:53.280756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:16.305 [2024-07-15 10:31:53.280929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:16.305 [2024-07-15 10:31:53.280929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:16.875 10:31:53 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:16.875 10:31:53 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:24:16.875 10:31:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:16.875 10:31:53 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:16.875 10:31:53 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:16.875 10:31:53 nvmf_rdma.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:16.875 10:31:53 nvmf_rdma.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:24:17.136 [2024-07-15 10:31:54.142748] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x24f2920/0x24f6e10) succeed. 00:24:17.136 [2024-07-15 10:31:54.155819] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x24f3ec0/0x25384a0) succeed. 00:24:17.136 10:31:54 nvmf_rdma.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:17.396 Malloc0 00:24:17.396 10:31:54 nvmf_rdma.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:17.656 10:31:54 nvmf_rdma.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:17.656 10:31:54 nvmf_rdma.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:17.916 [2024-07-15 10:31:54.919514] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:17.917 10:31:54 nvmf_rdma.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:24:17.917 [2024-07-15 10:31:55.083716] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:24:18.177 10:31:55 nvmf_rdma.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:24:18.177 [2024-07-15 10:31:55.252317] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:24:18.177 10:31:55 nvmf_rdma.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3038470 00:24:18.177 10:31:55 nvmf_rdma.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:18.177 10:31:55 nvmf_rdma.nvmf_failover -- host/failover.sh@30 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:18.177 10:31:55 nvmf_rdma.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3038470 /var/tmp/bdevperf.sock 00:24:18.177 10:31:55 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 3038470 ']' 00:24:18.177 10:31:55 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:18.177 10:31:55 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:18.177 10:31:55 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:18.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:18.177 10:31:55 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:18.177 10:31:55 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:19.118 10:31:56 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:19.118 10:31:56 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:24:19.118 10:31:56 nvmf_rdma.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:19.378 NVMe0n1 00:24:19.378 10:31:56 nvmf_rdma.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:19.378 00:24:19.638 10:31:56 nvmf_rdma.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3038652 00:24:19.638 10:31:56 nvmf_rdma.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:24:19.638 10:31:56 nvmf_rdma.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:20.577 10:31:57 nvmf_rdma.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:20.577 10:31:57 nvmf_rdma.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:24:23.874 10:32:00 nvmf_rdma.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:23.874 00:24:23.874 10:32:01 nvmf_rdma.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:24:24.134 10:32:01 nvmf_rdma.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:24:27.434 10:32:04 nvmf_rdma.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:27.434 [2024-07-15 10:32:04.355791] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:27.434 10:32:04 nvmf_rdma.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:24:28.376 
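Stripped of the xtrace prefixes, the failover sequence driven between 10:31:55 and 10:32:05 above amounts to attaching bdevperf to two RDMA paths of the subsystem and then rotating listeners out from under it while the 128-deep 4K verify workload runs for 15 seconds. A condensed recap of those RPCs (NQN, address and ports are the ones used in this run; $rpc is the SPDK rpc.py invoked in the trace):

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
# bdevperf side: attach the same subsystem over two listeners (4420 and 4421)
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma \
  -a 192.168.100.8 -s 4420 -f ipv4 -n "$NQN"
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma \
  -a 192.168.100.8 -s 4421 -f ipv4 -n "$NQN"
# target side: rotate listeners while I/O is in flight
$rpc nvmf_subsystem_remove_listener "$NQN" -t rdma -a 192.168.100.8 -s 4420
sleep 3
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma \
  -a 192.168.100.8 -s 4422 -f ipv4 -n "$NQN"
$rpc nvmf_subsystem_remove_listener "$NQN" -t rdma -a 192.168.100.8 -s 4421
sleep 3
$rpc nvmf_subsystem_add_listener "$NQN" -t rdma -a 192.168.100.8 -s 4420
sleep 1
$rpc nvmf_subsystem_remove_listener "$NQN" -t rdma -a 192.168.100.8 -s 4422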
10:32:05 nvmf_rdma.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:24:28.376 10:32:05 nvmf_rdma.nvmf_failover -- host/failover.sh@59 -- # wait 3038652 00:24:34.969 0 00:24:34.969 10:32:11 nvmf_rdma.nvmf_failover -- host/failover.sh@61 -- # killprocess 3038470 00:24:34.969 10:32:11 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 3038470 ']' 00:24:34.969 10:32:11 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 3038470 00:24:34.969 10:32:11 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:24:34.969 10:32:11 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:34.969 10:32:11 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3038470 00:24:34.969 10:32:11 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:34.969 10:32:11 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:34.969 10:32:11 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3038470' 00:24:34.969 killing process with pid 3038470 00:24:34.969 10:32:11 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@967 -- # kill 3038470 00:24:34.969 10:32:11 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@972 -- # wait 3038470 00:24:34.969 10:32:11 nvmf_rdma.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:34.969 [2024-07-15 10:31:55.326706] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:34.969 [2024-07-15 10:31:55.326764] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3038470 ] 00:24:34.969 EAL: No free 2048 kB hugepages reported on node 1 00:24:34.969 [2024-07-15 10:31:55.394739] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:34.969 [2024-07-15 10:31:55.459994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:34.969 Running I/O for 15 seconds... 
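What follows is the cat of try.txt: bdevperf's 15-second verify run, then the queued WRITE/READ commands that each listener removal aborts with "ABORTED - SQ DELETION" as the RDMA qpair is torn down and I/O fails over to the surviving path. Purely as an illustrative way to condense a dump like this (not part of the test script, and assuming try.txt from the trap above has not yet been removed), the aborted completions can be counted with a one-liner:

# Count completions aborted by SQ deletion during the listener failovers.
grep -c 'ABORTED - SQ DELETION' \
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt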
00:24:34.969 [2024-07-15 10:31:58.750181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:13880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.969 [2024-07-15 10:31:58.750226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.969 [2024-07-15 10:31:58.750248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:13888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.969 [2024-07-15 10:31:58.750257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.969 [2024-07-15 10:31:58.750266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:13896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.969 [2024-07-15 10:31:58.750274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.969 [2024-07-15 10:31:58.750283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:13904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.969 [2024-07-15 10:31:58.750291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.969 [2024-07-15 10:31:58.750300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:13912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.969 [2024-07-15 10:31:58.750307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.969 [2024-07-15 10:31:58.750317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.969 [2024-07-15 10:31:58.750324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.969 [2024-07-15 10:31:58.750333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:13928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.969 [2024-07-15 10:31:58.750341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.969 [2024-07-15 10:31:58.750350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:13936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.969 [2024-07-15 10:31:58.750357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.969 [2024-07-15 10:31:58.750366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:13944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.969 [2024-07-15 10:31:58.750373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.969 [2024-07-15 10:31:58.750382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:13952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.969 [2024-07-15 10:31:58.750389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 
sqhd:52b0 p:0 m:0 dnr:0 00:24:34.969 [2024-07-15 10:31:58.750398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.969 [2024-07-15 10:31:58.750411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.969 [2024-07-15 10:31:58.750421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:13968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.969 [2024-07-15 10:31:58.750428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.969 [2024-07-15 10:31:58.750437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:13976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.969 [2024-07-15 10:31:58.750444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.969 [2024-07-15 10:31:58.750453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:13984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.969 [2024-07-15 10:31:58.750461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.969 [2024-07-15 10:31:58.750470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:13992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.969 [2024-07-15 10:31:58.750478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.969 [2024-07-15 10:31:58.750487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:14000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.969 [2024-07-15 10:31:58.750494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.969 [2024-07-15 10:31:58.750503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:14008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.969 [2024-07-15 10:31:58.750511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.969 [2024-07-15 10:31:58.750520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:14016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.969 [2024-07-15 10:31:58.750527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.969 [2024-07-15 10:31:58.750537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:14024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.969 [2024-07-15 10:31:58.750545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.969 [2024-07-15 10:31:58.750554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:14032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.969 [2024-07-15 10:31:58.750561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.969 [2024-07-15 10:31:58.750570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:14040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.969 [2024-07-15 10:31:58.750578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.969 [2024-07-15 10:31:58.750588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:14048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.969 [2024-07-15 10:31:58.750595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.969 [2024-07-15 10:31:58.750604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:14056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.969 [2024-07-15 10:31:58.750612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.969 [2024-07-15 10:31:58.750623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:14064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.970 [2024-07-15 10:31:58.750630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.970 [2024-07-15 10:31:58.750640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:14072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.970 [2024-07-15 10:31:58.750647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.970 [2024-07-15 10:31:58.750657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:14080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.970 [2024-07-15 10:31:58.750664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.970 [2024-07-15 10:31:58.750673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:14088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.970 [2024-07-15 10:31:58.750680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.970 [2024-07-15 10:31:58.750689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:14096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.970 [2024-07-15 10:31:58.750696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.970 [2024-07-15 10:31:58.750705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:14104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.970 [2024-07-15 10:31:58.750713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.970 [2024-07-15 10:31:58.750722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:14112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.970 [2024-07-15 10:31:58.750729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.970 [2024-07-15 10:31:58.750737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:14120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.970 [2024-07-15 10:31:58.750745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.970 [2024-07-15 10:31:58.750754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:14128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.970 [2024-07-15 10:31:58.750761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.970 [2024-07-15 10:31:58.750770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:14136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.970 [2024-07-15 10:31:58.750776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.970 [2024-07-15 10:31:58.750786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:14144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.970 [2024-07-15 10:31:58.750794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.970 [2024-07-15 10:31:58.750803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:14152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.970 [2024-07-15 10:31:58.750810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.970 [2024-07-15 10:31:58.750819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:14160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.970 [2024-07-15 10:31:58.750827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.970 [2024-07-15 10:31:58.750837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:14168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.970 [2024-07-15 10:31:58.750844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.970 [2024-07-15 10:31:58.750853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:14176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.970 [2024-07-15 10:31:58.750860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.970 [2024-07-15 10:31:58.750869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:14184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.970 [2024-07-15 10:31:58.750876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.970 [2024-07-15 10:31:58.750885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:14192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.970 [2024-07-15 10:31:58.750893] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.970 [2024-07-15 10:31:58.750902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:14200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.970 [2024-07-15 10:31:58.750909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.970 [2024-07-15 10:31:58.750918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:14208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.970 [2024-07-15 10:31:58.750925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.970 [2024-07-15 10:31:58.750934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:14216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.970 [2024-07-15 10:31:58.750941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.970 [2024-07-15 10:31:58.750951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:14224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.970 [2024-07-15 10:31:58.750958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.970 [2024-07-15 10:31:58.750967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:14232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.970 [2024-07-15 10:31:58.750975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.970 [2024-07-15 10:31:58.750984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:14240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.970 [2024-07-15 10:31:58.750991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.970 [2024-07-15 10:31:58.751000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:14248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.970 [2024-07-15 10:31:58.751007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.970 [2024-07-15 10:31:58.751016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:14256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.970 [2024-07-15 10:31:58.751024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.970 [2024-07-15 10:31:58.751034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.970 [2024-07-15 10:31:58.751041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.970 [2024-07-15 10:31:58.751050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:14272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.970 [2024-07-15 10:31:58.751057] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.970 [2024-07-15 10:31:58.751067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:14280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.970 [2024-07-15 10:31:58.751073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.970 [2024-07-15 10:31:58.751082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:14288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.970 [2024-07-15 10:31:58.751090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.970 [2024-07-15 10:31:58.751099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:14296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.970 [2024-07-15 10:31:58.751106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.970 [2024-07-15 10:31:58.751115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:14304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.970 [2024-07-15 10:31:58.751122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.970 [2024-07-15 10:31:58.751131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:14312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.970 [2024-07-15 10:31:58.751138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.970 [2024-07-15 10:31:58.751147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:14320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.970 [2024-07-15 10:31:58.751154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.970 [2024-07-15 10:31:58.751163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:14328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.970 [2024-07-15 10:31:58.751170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.970 [2024-07-15 10:31:58.751179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fe000 len:0x1000 key:0x182f00 00:24:34.970 [2024-07-15 10:31:58.751187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.970 [2024-07-15 10:31:58.751197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:13320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fc000 len:0x1000 key:0x182f00 00:24:34.970 [2024-07-15 10:31:58.751204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.970 [2024-07-15 10:31:58.751214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:13328 len:8 SGL KEYED DATA 
BLOCK ADDRESS 0x2000075fa000 len:0x1000 key:0x182f00 00:24:34.970 [2024-07-15 10:31:58.751221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.970 [2024-07-15 10:31:58.751236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:13336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f8000 len:0x1000 key:0x182f00 00:24:34.970 [2024-07-15 10:31:58.751249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.970 [2024-07-15 10:31:58.751259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:13344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f6000 len:0x1000 key:0x182f00 00:24:34.970 [2024-07-15 10:31:58.751267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.970 [2024-07-15 10:31:58.751276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:13352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f4000 len:0x1000 key:0x182f00 00:24:34.970 [2024-07-15 10:31:58.751283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.970 [2024-07-15 10:31:58.751293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f2000 len:0x1000 key:0x182f00 00:24:34.970 [2024-07-15 10:31:58.751300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.970 [2024-07-15 10:31:58.751309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:13368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f0000 len:0x1000 key:0x182f00 00:24:34.970 [2024-07-15 10:31:58.751317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.971 [2024-07-15 10:31:58.751326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:13376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ee000 len:0x1000 key:0x182f00 00:24:34.971 [2024-07-15 10:31:58.751334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.971 [2024-07-15 10:31:58.751343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:13384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ec000 len:0x1000 key:0x182f00 00:24:34.971 [2024-07-15 10:31:58.751350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.971 [2024-07-15 10:31:58.751359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:13392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ea000 len:0x1000 key:0x182f00 00:24:34.971 [2024-07-15 10:31:58.751366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.971 [2024-07-15 10:31:58.751376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:13400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e8000 len:0x1000 key:0x182f00 00:24:34.971 
[2024-07-15 10:31:58.751383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.971 [2024-07-15 10:31:58.751393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:13408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e6000 len:0x1000 key:0x182f00 00:24:34.971 [2024-07-15 10:31:58.751400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.971 [2024-07-15 10:31:58.751409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:13416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e4000 len:0x1000 key:0x182f00 00:24:34.971 [2024-07-15 10:31:58.751416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.971 [2024-07-15 10:31:58.751427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:13424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e2000 len:0x1000 key:0x182f00 00:24:34.971 [2024-07-15 10:31:58.751435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.971 [2024-07-15 10:31:58.751445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:13432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e0000 len:0x1000 key:0x182f00 00:24:34.971 [2024-07-15 10:31:58.751452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.971 [2024-07-15 10:31:58.751461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:13440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075de000 len:0x1000 key:0x182f00 00:24:34.971 [2024-07-15 10:31:58.751469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.971 [2024-07-15 10:31:58.751478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:13448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075dc000 len:0x1000 key:0x182f00 00:24:34.971 [2024-07-15 10:31:58.751485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.971 [2024-07-15 10:31:58.751495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075da000 len:0x1000 key:0x182f00 00:24:34.971 [2024-07-15 10:31:58.751502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.971 [2024-07-15 10:31:58.751511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:13464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d8000 len:0x1000 key:0x182f00 00:24:34.971 [2024-07-15 10:31:58.751519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.971 [2024-07-15 10:31:58.751528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:13472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d6000 len:0x1000 key:0x182f00 00:24:34.971 [2024-07-15 10:31:58.751535] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.971 [2024-07-15 10:31:58.751544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:13480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d4000 len:0x1000 key:0x182f00 00:24:34.971 [2024-07-15 10:31:58.751552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.971 [2024-07-15 10:31:58.751561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:13488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d2000 len:0x1000 key:0x182f00 00:24:34.971 [2024-07-15 10:31:58.751568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.971 [2024-07-15 10:31:58.751578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:13496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d0000 len:0x1000 key:0x182f00 00:24:34.971 [2024-07-15 10:31:58.751585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.971 [2024-07-15 10:31:58.751594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:13504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ce000 len:0x1000 key:0x182f00 00:24:34.971 [2024-07-15 10:31:58.751602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.971 [2024-07-15 10:31:58.751612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:13512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075cc000 len:0x1000 key:0x182f00 00:24:34.971 [2024-07-15 10:31:58.751620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.971 [2024-07-15 10:31:58.751630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ca000 len:0x1000 key:0x182f00 00:24:34.971 [2024-07-15 10:31:58.751637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.971 [2024-07-15 10:31:58.751647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:13528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c8000 len:0x1000 key:0x182f00 00:24:34.971 [2024-07-15 10:31:58.751654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.971 [2024-07-15 10:31:58.751663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:13536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c6000 len:0x1000 key:0x182f00 00:24:34.971 [2024-07-15 10:31:58.751670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.971 [2024-07-15 10:31:58.751680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:13544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c4000 len:0x1000 key:0x182f00 00:24:34.971 [2024-07-15 10:31:58.751687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion notices omitted: READ sqid:1, lba 13552-13864, len:8, SGL KEYED DATA BLOCK key:0x182f00, each completed as ABORTED - SQ DELETION (00/08) during qpair teardown ...]
00:24:34.972 [2024-07-15 10:31:58.754687] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:34.972 [2024-07-15 10:31:58.754698] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:34.972 [2024-07-15 10:31:58.754705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13872 len:8 PRP1 0x0 PRP2 0x0
00:24:34.972 [2024-07-15 10:31:58.754716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:34.972 [2024-07-15 10:31:58.754748] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4980 was disconnected and freed. reset controller.
00:24:34.972 [2024-07-15 10:31:58.754757] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421
00:24:34.972 [2024-07-15 10:31:58.754765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.972 [2024-07-15 10:31:58.758359] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.972 [2024-07-15 10:31:58.778366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:24:34.972 [2024-07-15 10:31:58.822863] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion notices omitted: READ sqid:1 (lba 76528-77040, SGL KEYED DATA BLOCK key:0x182f00) and WRITE sqid:1 (lba 77048-77536, SGL DATA BLOCK OFFSET 0x0), each completed as ABORTED - SQ DELETION (00/08) during qpair teardown ...]
00:24:34.976 [2024-07-15 10:32:02.192082] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:34.976 [2024-07-15 10:32:02.192094] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:34.976 [2024-07-15 10:32:02.192101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77544 len:8 PRP1 0x0 PRP2 0x0
00:24:34.976 [2024-07-15 10:32:02.192109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:34.976 [2024-07-15 10:32:02.192141] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e48c0 was disconnected and freed. reset controller.
00:24:34.976 [2024-07-15 10:32:02.192150] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4421 to 192.168.100.8:4422
00:24:34.976 [2024-07-15 10:32:02.192161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:34.976 [2024-07-15 10:32:02.195742] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:34.976 [2024-07-15 10:32:02.215885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:24:34.976 [2024-07-15 10:32:02.275235] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:24:34.976 [2024-07-15 10:32:06.535521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f8000 len:0x1000 key:0x182f00 00:24:34.976 [2024-07-15 10:32:06.535563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.976 [2024-07-15 10:32:06.535580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:7520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fa000 len:0x1000 key:0x182f00 00:24:34.976 [2024-07-15 10:32:06.535588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.976 [2024-07-15 10:32:06.535598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:7528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x182f00 00:24:34.976 [2024-07-15 10:32:06.535606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.976 [2024-07-15 10:32:06.535615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:7536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x182f00 00:24:34.976 [2024-07-15 10:32:06.535622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.976 [2024-07-15 10:32:06.535632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:7544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x182f00 00:24:34.976 [2024-07-15 10:32:06.535639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.976 [2024-07-15 10:32:06.535649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ea000 len:0x1000 key:0x182f00 00:24:34.976 [2024-07-15 10:32:06.535656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.976 [2024-07-15 10:32:06.535665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:7560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e8000 len:0x1000 key:0x182f00 00:24:34.976 [2024-07-15 10:32:06.535672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.976 [2024-07-15 10:32:06.535682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:7568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007596000 len:0x1000 key:0x182f00 00:24:34.976 [2024-07-15 10:32:06.535694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.976 [2024-07-15 10:32:06.535704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:7576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x182f00 00:24:34.976 [2024-07-15 10:32:06.535711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.976 [2024-07-15 10:32:06.535721] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:7584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x182f00 00:24:34.976 [2024-07-15 10:32:06.535728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.976 [2024-07-15 10:32:06.535737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:8040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.976 [2024-07-15 10:32:06.535744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.976 [2024-07-15 10:32:06.535754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:7592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a6000 len:0x1000 key:0x182f00 00:24:34.976 [2024-07-15 10:32:06.535761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.976 [2024-07-15 10:32:06.535771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:7600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c0000 len:0x1000 key:0x182f00 00:24:34.976 [2024-07-15 10:32:06.535778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.976 [2024-07-15 10:32:06.535787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:7608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b4000 len:0x1000 key:0x182f00 00:24:34.976 [2024-07-15 10:32:06.535794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.976 [2024-07-15 10:32:06.535803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:7616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b6000 len:0x1000 key:0x182f00 00:24:34.976 [2024-07-15 10:32:06.535811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.976 [2024-07-15 10:32:06.535820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:7624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b8000 len:0x1000 key:0x182f00 00:24:34.976 [2024-07-15 10:32:06.535828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.976 [2024-07-15 10:32:06.535837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:7632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ba000 len:0x1000 key:0x182f00 00:24:34.976 [2024-07-15 10:32:06.535844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.976 [2024-07-15 10:32:06.535853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:7640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075dc000 len:0x1000 key:0x182f00 00:24:34.976 [2024-07-15 10:32:06.535860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.976 [2024-07-15 10:32:06.535870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:7648 len:8 SGL KEYED DATA BLOCK 
ADDRESS 0x2000075de000 len:0x1000 key:0x182f00 00:24:34.976 [2024-07-15 10:32:06.535877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.976 [2024-07-15 10:32:06.535888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:8048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.976 [2024-07-15 10:32:06.535896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.976 [2024-07-15 10:32:06.535905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:8056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.976 [2024-07-15 10:32:06.535912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.976 [2024-07-15 10:32:06.535921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:8064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.976 [2024-07-15 10:32:06.535928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.976 [2024-07-15 10:32:06.535938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:8072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.976 [2024-07-15 10:32:06.535945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.976 [2024-07-15 10:32:06.535954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:8080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.976 [2024-07-15 10:32:06.535961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.976 [2024-07-15 10:32:06.535970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:8088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.976 [2024-07-15 10:32:06.535977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.976 [2024-07-15 10:32:06.535987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:8096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.976 [2024-07-15 10:32:06.535994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.976 [2024-07-15 10:32:06.536003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:8104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.976 [2024-07-15 10:32:06.536010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.976 [2024-07-15 10:32:06.536019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:8112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.976 [2024-07-15 10:32:06.536026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.976 [2024-07-15 10:32:06.536035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 
nsid:1 lba:8120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.976 [2024-07-15 10:32:06.536041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.976 [2024-07-15 10:32:06.536051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:8128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.976 [2024-07-15 10:32:06.536057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.976 [2024-07-15 10:32:06.536066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:8136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.976 [2024-07-15 10:32:06.536073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.976 [2024-07-15 10:32:06.536082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:8144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.976 [2024-07-15 10:32:06.536090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.976 [2024-07-15 10:32:06.536100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.977 [2024-07-15 10:32:06.536106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.977 [2024-07-15 10:32:06.536116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.977 [2024-07-15 10:32:06.536124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.977 [2024-07-15 10:32:06.536133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:8168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.977 [2024-07-15 10:32:06.536140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.977 [2024-07-15 10:32:06.536149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:8176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.977 [2024-07-15 10:32:06.536155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.977 [2024-07-15 10:32:06.536164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:8184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.977 [2024-07-15 10:32:06.536171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.977 [2024-07-15 10:32:06.536180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:8192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.977 [2024-07-15 10:32:06.536187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.977 [2024-07-15 10:32:06.536196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:47 nsid:1 lba:8200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.977 [2024-07-15 10:32:06.536203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.977 [2024-07-15 10:32:06.536212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:8208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.977 [2024-07-15 10:32:06.536219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.977 [2024-07-15 10:32:06.536228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:8216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.977 [2024-07-15 10:32:06.536240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.977 [2024-07-15 10:32:06.536249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:8224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.977 [2024-07-15 10:32:06.536256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.977 [2024-07-15 10:32:06.536266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:8232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.977 [2024-07-15 10:32:06.536273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.977 [2024-07-15 10:32:06.536282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:7656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x182f00 00:24:34.977 [2024-07-15 10:32:06.536289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.977 [2024-07-15 10:32:06.536301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:7664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075aa000 len:0x1000 key:0x182f00 00:24:34.977 [2024-07-15 10:32:06.536308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.977 [2024-07-15 10:32:06.536317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:7672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a8000 len:0x1000 key:0x182f00 00:24:34.977 [2024-07-15 10:32:06.536324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.977 [2024-07-15 10:32:06.536334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:7680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007586000 len:0x1000 key:0x182f00 00:24:34.977 [2024-07-15 10:32:06.536341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.977 [2024-07-15 10:32:06.536350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:7688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007584000 len:0x1000 key:0x182f00 00:24:34.977 [2024-07-15 10:32:06.536357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.977 [2024-07-15 10:32:06.536367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:8240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.977 [2024-07-15 10:32:06.536374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.977 [2024-07-15 10:32:06.536383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:8248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.977 [2024-07-15 10:32:06.536390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.977 [2024-07-15 10:32:06.536400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:8256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.977 [2024-07-15 10:32:06.536406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.977 [2024-07-15 10:32:06.536415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:8264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.977 [2024-07-15 10:32:06.536422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.977 [2024-07-15 10:32:06.536431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:8272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.977 [2024-07-15 10:32:06.536438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.977 [2024-07-15 10:32:06.536447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:8280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.977 [2024-07-15 10:32:06.536454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.977 [2024-07-15 10:32:06.536463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:8288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.977 [2024-07-15 10:32:06.536470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.977 [2024-07-15 10:32:06.536479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:8296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.977 [2024-07-15 10:32:06.536485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.977 [2024-07-15 10:32:06.536497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:7696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759a000 len:0x1000 key:0x182f00 00:24:34.977 [2024-07-15 10:32:06.536504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.977 [2024-07-15 10:32:06.536513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:7704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f6000 len:0x1000 key:0x182f00 00:24:34.977 [2024-07-15 10:32:06.536520] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.977 [2024-07-15 10:32:06.536529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:7712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075be000 len:0x1000 key:0x182f00 00:24:34.977 [2024-07-15 10:32:06.536536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.977 [2024-07-15 10:32:06.536545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:7720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b0000 len:0x1000 key:0x182f00 00:24:34.977 [2024-07-15 10:32:06.536552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.977 [2024-07-15 10:32:06.536562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:7728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d0000 len:0x1000 key:0x182f00 00:24:34.977 [2024-07-15 10:32:06.536569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.977 [2024-07-15 10:32:06.536578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ce000 len:0x1000 key:0x182f00 00:24:34.977 [2024-07-15 10:32:06.536585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.977 [2024-07-15 10:32:06.536594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:7744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x182f00 00:24:34.977 [2024-07-15 10:32:06.536601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.977 [2024-07-15 10:32:06.536610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:7752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x182f00 00:24:34.977 [2024-07-15 10:32:06.536618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.977 [2024-07-15 10:32:06.536627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:7760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007582000 len:0x1000 key:0x182f00 00:24:34.977 [2024-07-15 10:32:06.536634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.977 [2024-07-15 10:32:06.536643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:7768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f0000 len:0x1000 key:0x182f00 00:24:34.977 [2024-07-15 10:32:06.536650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.977 [2024-07-15 10:32:06.536660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ee000 len:0x1000 key:0x182f00 00:24:34.977 [2024-07-15 10:32:06.536667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.977 [2024-07-15 10:32:06.536677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:7784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ac000 len:0x1000 key:0x182f00 00:24:34.977 [2024-07-15 10:32:06.536685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.978 [2024-07-15 10:32:06.536695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:7792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d8000 len:0x1000 key:0x182f00 00:24:34.978 [2024-07-15 10:32:06.536702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.978 [2024-07-15 10:32:06.536711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:7800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075da000 len:0x1000 key:0x182f00 00:24:34.978 [2024-07-15 10:32:06.536718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.978 [2024-07-15 10:32:06.536728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:7808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x182f00 00:24:34.978 [2024-07-15 10:32:06.536735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.978 [2024-07-15 10:32:06.536745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:7816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x182f00 00:24:34.978 [2024-07-15 10:32:06.536752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.978 [2024-07-15 10:32:06.536761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:7824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x182f00 00:24:34.978 [2024-07-15 10:32:06.536769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.978 [2024-07-15 10:32:06.536778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:7832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x182f00 00:24:34.978 [2024-07-15 10:32:06.536785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.978 [2024-07-15 10:32:06.536794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:7840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x182f00 00:24:34.978 [2024-07-15 10:32:06.536801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.978 [2024-07-15 10:32:06.536810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:7848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007588000 len:0x1000 key:0x182f00 00:24:34.978 [2024-07-15 10:32:06.536818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.978 [2024-07-15 
10:32:06.536827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:7856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fc000 len:0x1000 key:0x182f00 00:24:34.978 [2024-07-15 10:32:06.536834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.978 [2024-07-15 10:32:06.536843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x182f00 00:24:34.978 [2024-07-15 10:32:06.536850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.978 [2024-07-15 10:32:06.536860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:7872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x182f00 00:24:34.978 [2024-07-15 10:32:06.536869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.978 [2024-07-15 10:32:06.536878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:7880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x182f00 00:24:34.978 [2024-07-15 10:32:06.536885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.978 [2024-07-15 10:32:06.536894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:8304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.978 [2024-07-15 10:32:06.536901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.978 [2024-07-15 10:32:06.536910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:8312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.978 [2024-07-15 10:32:06.536917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.978 [2024-07-15 10:32:06.536925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.978 [2024-07-15 10:32:06.536933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.978 [2024-07-15 10:32:06.536942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.978 [2024-07-15 10:32:06.536948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.978 [2024-07-15 10:32:06.536957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.978 [2024-07-15 10:32:06.536964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.978 [2024-07-15 10:32:06.536973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:8344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.978 [2024-07-15 10:32:06.536980] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.978 [2024-07-15 10:32:06.536990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:8352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.978 [2024-07-15 10:32:06.536997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.978 [2024-07-15 10:32:06.537006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:8360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.978 [2024-07-15 10:32:06.537013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.978 [2024-07-15 10:32:06.537022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c8000 len:0x1000 key:0x182f00 00:24:34.978 [2024-07-15 10:32:06.537029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.978 [2024-07-15 10:32:06.537039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:7896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075bc000 len:0x1000 key:0x182f00 00:24:34.978 [2024-07-15 10:32:06.537046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.978 [2024-07-15 10:32:06.537055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:7904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f4000 len:0x1000 key:0x182f00 00:24:34.978 [2024-07-15 10:32:06.537067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.978 [2024-07-15 10:32:06.537076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:8368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.978 [2024-07-15 10:32:06.537083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.978 [2024-07-15 10:32:06.537092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:8376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.978 [2024-07-15 10:32:06.537099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.978 [2024-07-15 10:32:06.537108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:8384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.978 [2024-07-15 10:32:06.537115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.978 [2024-07-15 10:32:06.537124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:8392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.978 [2024-07-15 10:32:06.537131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.978 [2024-07-15 10:32:06.537141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:8400 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.978 [2024-07-15 10:32:06.537147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.978 [2024-07-15 10:32:06.537156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:8408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.978 [2024-07-15 10:32:06.537163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.978 [2024-07-15 10:32:06.537172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:8416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.978 [2024-07-15 10:32:06.537179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.978 [2024-07-15 10:32:06.537189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:7912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x182f00 00:24:34.978 [2024-07-15 10:32:06.537196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.978 [2024-07-15 10:32:06.537205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:7920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fe000 len:0x1000 key:0x182f00 00:24:34.978 [2024-07-15 10:32:06.537212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.978 [2024-07-15 10:32:06.537221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:7928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e6000 len:0x1000 key:0x182f00 00:24:34.978 [2024-07-15 10:32:06.537228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.978 [2024-07-15 10:32:06.537243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:7936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ca000 len:0x1000 key:0x182f00 00:24:34.978 [2024-07-15 10:32:06.537250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.978 [2024-07-15 10:32:06.537259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:7944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x182f00 00:24:34.978 [2024-07-15 10:32:06.537268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.978 [2024-07-15 10:32:06.537277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:7952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c6000 len:0x1000 key:0x182f00 00:24:34.978 [2024-07-15 10:32:06.537285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.978 [2024-07-15 10:32:06.537294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:7960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c4000 len:0x1000 key:0x182f00 00:24:34.978 [2024-07-15 10:32:06.537301] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.978 [2024-07-15 10:32:06.537310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f2000 len:0x1000 key:0x182f00 00:24:34.978 [2024-07-15 10:32:06.537318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.978 [2024-07-15 10:32:06.537326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:8424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.978 [2024-07-15 10:32:06.537333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.978 [2024-07-15 10:32:06.537342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.978 [2024-07-15 10:32:06.537350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.978 [2024-07-15 10:32:06.537359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:8440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.978 [2024-07-15 10:32:06.537366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.979 [2024-07-15 10:32:06.537375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:8448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.979 [2024-07-15 10:32:06.537382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.979 [2024-07-15 10:32:06.537391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:8456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.979 [2024-07-15 10:32:06.537398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.979 [2024-07-15 10:32:06.537407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.979 [2024-07-15 10:32:06.537414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.979 [2024-07-15 10:32:06.537423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:8472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.979 [2024-07-15 10:32:06.537430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.979 [2024-07-15 10:32:06.537439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:8480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.979 [2024-07-15 10:32:06.537446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.979 [2024-07-15 10:32:06.537455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:7976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d6000 len:0x1000 key:0x182f00 00:24:34.979 
[2024-07-15 10:32:06.537463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.979 [2024-07-15 10:32:06.537473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d4000 len:0x1000 key:0x182f00 00:24:34.979 [2024-07-15 10:32:06.537479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.979 [2024-07-15 10:32:06.537489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:7992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d2000 len:0x1000 key:0x182f00 00:24:34.979 [2024-07-15 10:32:06.537496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.979 [2024-07-15 10:32:06.537506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:8000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007592000 len:0x1000 key:0x182f00 00:24:34.979 [2024-07-15 10:32:06.537513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.979 [2024-07-15 10:32:06.537522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:8008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007594000 len:0x1000 key:0x182f00 00:24:34.979 [2024-07-15 10:32:06.537529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.979 [2024-07-15 10:32:06.537539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:8016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x182f00 00:24:34.979 [2024-07-15 10:32:06.537546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.979 [2024-07-15 10:32:06.537555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:8024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0x182f00 00:24:34.979 [2024-07-15 10:32:06.537562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.979 [2024-07-15 10:32:06.537572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c2000 len:0x1000 key:0x182f00 00:24:34.979 [2024-07-15 10:32:06.537578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.979 [2024-07-15 10:32:06.537587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:8488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.979 [2024-07-15 10:32:06.537594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.979 [2024-07-15 10:32:06.537604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:8496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.979 [2024-07-15 10:32:06.537611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 
sqhd:52b0 p:0 m:0 dnr:0 00:24:34.979 [2024-07-15 10:32:06.537619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:8504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.979 [2024-07-15 10:32:06.537626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.979 [2024-07-15 10:32:06.537635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:8512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.979 [2024-07-15 10:32:06.537642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.979 [2024-07-15 10:32:06.537651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:34.979 [2024-07-15 10:32:06.537660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:e9b70000 sqhd:52b0 p:0 m:0 dnr:0 00:24:34.979 [2024-07-15 10:32:06.537761] rdma_provider_verbs.c: 86:spdk_rdma_provider_qp_destroy: *WARNING*: Destroying qpair with queued Work Requests 00:24:34.979 [2024-07-15 10:32:06.539833] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:34.979 [2024-07-15 10:32:06.539842] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:34.979 [2024-07-15 10:32:06.539850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8528 len:8 PRP1 0x0 PRP2 0x0 00:24:34.979 [2024-07-15 10:32:06.539857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:34.979 [2024-07-15 10:32:06.539888] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e48c0 was disconnected and freed. reset controller. 00:24:34.979 [2024-07-15 10:32:06.539898] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4422 to 192.168.100.8:4420 00:24:34.979 [2024-07-15 10:32:06.539906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:34.979 [2024-07-15 10:32:06.543484] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:34.979 [2024-07-15 10:32:06.564865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:34.979 [2024-07-15 10:32:06.616074] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
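The dump above is the tail of the last forced failover in this pass: every queued READ and WRITE on the old path is completed manually with ABORTED - SQ DELETION status, the qpair is freed, and bdev_nvme fails over from 192.168.100.8:4422 back to 192.168.100.8:4420 before resetting the controller. Which path a bdevperf-attached controller is currently using can be checked by hand with the same RPC the test script greps; a minimal sketch, assuming the bdevperf RPC socket used in this job:

    # list the NVMe bdev controllers (and the transport IDs they are using)
    # over the bdevperf application's RPC socket
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers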
00:24:34.979
00:24:34.979 Latency(us)
00:24:34.979 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:34.979 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:24:34.979 Verification LBA range: start 0x0 length 0x4000
00:24:34.979 NVMe0n1 : 15.00 13287.52 51.90 252.00 0.00 9426.29 339.63 1020613.97
00:24:34.979 ===================================================================================================================
00:24:34.979 Total : 13287.52 51.90 252.00 0.00 9426.29 339.63 1020613.97
00:24:34.979 Received shutdown signal, test time was about 15.000000 seconds
00:24:34.979
00:24:34.979 Latency(us)
00:24:34.979 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:34.979 ===================================================================================================================
00:24:34.979 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:34.979 10:32:11 nvmf_rdma.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:24:34.979 10:32:11 nvmf_rdma.nvmf_failover -- host/failover.sh@65 -- # count=3
00:24:34.979 10:32:11 nvmf_rdma.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:24:34.979 10:32:11 nvmf_rdma.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3041662
00:24:34.979 10:32:11 nvmf_rdma.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3041662 /var/tmp/bdevperf.sock
00:24:34.979 10:32:11 nvmf_rdma.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:24:34.979 10:32:11 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 3041662 ']'
00:24:34.979 10:32:11 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:24:34.979 10:32:11 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
00:24:34.979 10:32:11 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
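The failover.sh@65 and @67 records above are the pass criterion for the first bdevperf run: the captured output must contain exactly three 'Resetting controller successful' messages, one per forced listener switch. A minimal sketch of that check, assuming the run's output was redirected to the try.txt file referenced later in this log:

    # assumed capture path; the job keeps bdevperf output in test/nvmf/host/try.txt
    log=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
    count=$(grep -c 'Resetting controller successful' "$log")
    # three failovers were forced, so three successful resets are expected
    (( count == 3 )) || exit 1

The second bdevperf instance is then launched with -z -r /var/tmp/bdevperf.sock, so it sits idle until a job is submitted over that RPC socket.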
00:24:34.979 10:32:11 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable
00:24:34.979 10:32:11 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:24:35.919 10:32:12 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:24:35.919 10:32:12 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@862 -- # return 0
00:24:35.919 10:32:12 nvmf_rdma.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421
[2024-07-15 10:32:12.912113] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 ***
00:24:35.919 10:32:12 nvmf_rdma.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422
[2024-07-15 10:32:13.080647] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 ***
00:24:35.919 10:32:13 nvmf_rdma.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:36.179 NVMe0n1
00:24:36.179 10:32:13 nvmf_rdma.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:36.439
00:24:36.439 10:32:13 nvmf_rdma.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:36.700
00:24:36.700 10:32:13 nvmf_rdma.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:36.700 10:32:13 nvmf_rdma.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:24:36.960 10:32:14 nvmf_rdma.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:37.221 10:32:14 nvmf_rdma.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:24:40.526 10:32:17 nvmf_rdma.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:40.526 10:32:17 nvmf_rdma.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:24:40.526 10:32:17 nvmf_rdma.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:24:40.526 10:32:17 nvmf_rdma.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3042686
00:24:40.526 10:32:17 nvmf_rdma.nvmf_failover -- host/failover.sh@92 -- # wait 3042686
00:24:41.469 0
00:24:41.469 10:32:18 nvmf_rdma.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
[2024-07-15 10:32:11.997538] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
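The xtrace records above set up the RPC-driven failover pass: two extra RDMA listeners are published on the target, the same subsystem is attached to bdevperf three times under the single bdev name NVMe0 (giving the controller two alternate paths), and the active 4420 path is then detached so the queued verify job must fail over. A condensed sketch of that sequence, assembled from the commands in the trace; the shell variables are illustrative:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    nqn=nqn.2016-06.io.spdk:cnode1

    # publish two additional RDMA listeners on the target
    $rpc nvmf_subsystem_add_listener $nqn -t rdma -a 192.168.100.8 -s 4421
    $rpc nvmf_subsystem_add_listener $nqn -t rdma -a 192.168.100.8 -s 4422

    # register all three paths with bdevperf under one controller name
    for port in 4420 4421 4422; do
        $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t rdma \
            -a 192.168.100.8 -s $port -f ipv4 -n $nqn
    done

    # confirm the controller exists, then drop the active path to force a failover
    $rpc -s $sock bdev_nvme_get_controllers | grep -q NVMe0
    $rpc -s $sock bdev_nvme_detach_controller NVMe0 -t rdma \
        -a 192.168.100.8 -s 4420 -f ipv4 -n $nqn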
00:24:41.469 [2024-07-15 10:32:11.997595] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3041662 ] 00:24:41.469 EAL: No free 2048 kB hugepages reported on node 1 00:24:41.469 [2024-07-15 10:32:12.063464] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:41.469 [2024-07-15 10:32:12.127125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:41.469 [2024-07-15 10:32:14.136548] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:24:41.469 [2024-07-15 10:32:14.137276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.469 [2024-07-15 10:32:14.137313] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.469 [2024-07-15 10:32:14.165586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:41.469 [2024-07-15 10:32:14.191259] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:41.469 Running I/O for 1 seconds... 00:24:41.469 00:24:41.469 Latency(us) 00:24:41.469 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:41.469 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:41.469 Verification LBA range: start 0x0 length 0x4000 00:24:41.469 NVMe0n1 : 1.00 16961.36 66.26 0.00 0.00 7500.09 2170.88 12615.68 00:24:41.469 =================================================================================================================== 00:24:41.469 Total : 16961.36 66.26 0.00 0.00 7500.09 2170.88 12615.68 00:24:41.469 10:32:18 nvmf_rdma.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:41.469 10:32:18 nvmf_rdma.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:24:41.469 10:32:18 nvmf_rdma.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:41.743 10:32:18 nvmf_rdma.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:41.743 10:32:18 nvmf_rdma.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:24:42.047 10:32:18 nvmf_rdma.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:42.047 10:32:19 nvmf_rdma.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:24:45.378 10:32:22 nvmf_rdma.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:45.378 10:32:22 nvmf_rdma.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:24:45.378 10:32:22 nvmf_rdma.nvmf_failover -- host/failover.sh@108 -- # killprocess 3041662 00:24:45.378 10:32:22 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 3041662 ']' 00:24:45.378 
10:32:22 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 3041662 00:24:45.378 10:32:22 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:24:45.378 10:32:22 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:45.378 10:32:22 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3041662 00:24:45.378 10:32:22 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:45.378 10:32:22 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:45.378 10:32:22 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3041662' 00:24:45.378 killing process with pid 3041662 00:24:45.378 10:32:22 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@967 -- # kill 3041662 00:24:45.378 10:32:22 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@972 -- # wait 3041662 00:24:45.378 10:32:22 nvmf_rdma.nvmf_failover -- host/failover.sh@110 -- # sync 00:24:45.378 10:32:22 nvmf_rdma.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:45.638 10:32:22 nvmf_rdma.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:24:45.638 10:32:22 nvmf_rdma.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:45.638 10:32:22 nvmf_rdma.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:24:45.638 10:32:22 nvmf_rdma.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:45.638 10:32:22 nvmf_rdma.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:24:45.638 10:32:22 nvmf_rdma.nvmf_failover -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:24:45.638 10:32:22 nvmf_rdma.nvmf_failover -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:24:45.638 10:32:22 nvmf_rdma.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:24:45.638 10:32:22 nvmf_rdma.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:45.638 10:32:22 nvmf_rdma.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:24:45.638 rmmod nvme_rdma 00:24:45.638 rmmod nvme_fabrics 00:24:45.638 10:32:22 nvmf_rdma.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:45.638 10:32:22 nvmf_rdma.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:24:45.638 10:32:22 nvmf_rdma.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:24:45.638 10:32:22 nvmf_rdma.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 3037951 ']' 00:24:45.638 10:32:22 nvmf_rdma.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 3037951 00:24:45.638 10:32:22 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 3037951 ']' 00:24:45.638 10:32:22 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 3037951 00:24:45.638 10:32:22 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:24:45.638 10:32:22 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:45.638 10:32:22 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3037951 00:24:45.638 10:32:22 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:45.638 10:32:22 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:45.638 10:32:22 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with 
pid 3037951' 00:24:45.638 killing process with pid 3037951 00:24:45.638 10:32:22 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@967 -- # kill 3037951 00:24:45.639 10:32:22 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@972 -- # wait 3037951 00:24:45.899 10:32:22 nvmf_rdma.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:45.899 10:32:22 nvmf_rdma.nvmf_failover -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:24:45.899 00:24:45.899 real 0m38.019s 00:24:45.899 user 2m2.202s 00:24:45.899 sys 0m7.679s 00:24:45.899 10:32:22 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:45.899 10:32:22 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:45.899 ************************************ 00:24:45.899 END TEST nvmf_failover 00:24:45.899 ************************************ 00:24:45.899 10:32:23 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:24:45.899 10:32:23 nvmf_rdma -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:24:45.899 10:32:23 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:45.899 10:32:23 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:45.899 10:32:23 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:45.899 ************************************ 00:24:45.899 START TEST nvmf_host_discovery 00:24:45.899 ************************************ 00:24:45.899 10:32:23 nvmf_rdma.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:24:46.160 * Looking for test storage... 00:24:46.160 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:46.160 10:32:23 nvmf_rdma.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:46.160 10:32:23 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:24:46.160 10:32:23 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:46.160 10:32:23 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:46.160 10:32:23 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:46.160 10:32:23 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:46.160 10:32:23 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:46.160 10:32:23 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:46.160 10:32:23 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:46.160 10:32:23 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:46.160 10:32:23 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:46.160 10:32:23 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:46.160 10:32:23 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:46.160 10:32:23 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:46.160 10:32:23 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:46.160 10:32:23 
nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:46.160 10:32:23 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:46.160 10:32:23 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:46.160 10:32:23 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:46.160 10:32:23 nvmf_rdma.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:46.160 10:32:23 nvmf_rdma.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:46.160 10:32:23 nvmf_rdma.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:46.160 10:32:23 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.160 10:32:23 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.160 10:32:23 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.160 10:32:23 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:24:46.160 10:32:23 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.160 10:32:23 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:24:46.160 10:32:23 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:46.160 10:32:23 nvmf_rdma.nvmf_host_discovery -- 
nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:46.160 10:32:23 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:46.160 10:32:23 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:46.160 10:32:23 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:46.160 10:32:23 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:46.160 10:32:23 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:46.160 10:32:23 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:46.160 10:32:23 nvmf_rdma.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' rdma == rdma ']' 00:24:46.160 10:32:23 nvmf_rdma.nvmf_host_discovery -- host/discovery.sh@12 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:24:46.160 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:24:46.161 10:32:23 nvmf_rdma.nvmf_host_discovery -- host/discovery.sh@13 -- # exit 0 00:24:46.161 00:24:46.161 real 0m0.129s 00:24:46.161 user 0m0.061s 00:24:46.161 sys 0m0.076s 00:24:46.161 10:32:23 nvmf_rdma.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:46.161 10:32:23 nvmf_rdma.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:46.161 ************************************ 00:24:46.161 END TEST nvmf_host_discovery 00:24:46.161 ************************************ 00:24:46.161 10:32:23 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:24:46.161 10:32:23 nvmf_rdma -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:24:46.161 10:32:23 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:46.161 10:32:23 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:46.161 10:32:23 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:46.161 ************************************ 00:24:46.161 START TEST nvmf_host_multipath_status 00:24:46.161 ************************************ 00:24:46.161 10:32:23 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:24:46.161 * Looking for test storage... 
00:24:46.161 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:46.161 10:32:23 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:46.161 10:32:23 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:24:46.422 10:32:23 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:46.422 10:32:23 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:46.422 10:32:23 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:46.422 10:32:23 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:46.422 10:32:23 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:46.422 10:32:23 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:46.422 10:32:23 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:46.422 10:32:23 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:46.422 10:32:23 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:46.422 10:32:23 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:46.422 10:32:23 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:46.422 10:32:23 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:46.422 10:32:23 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:46.422 10:32:23 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:46.422 10:32:23 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:46.422 10:32:23 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:46.422 10:32:23 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:46.422 10:32:23 nvmf_rdma.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:46.422 10:32:23 nvmf_rdma.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:46.422 10:32:23 nvmf_rdma.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:46.422 10:32:23 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.422 10:32:23 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.422 10:32:23 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.422 10:32:23 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:24:46.422 10:32:23 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:46.422 10:32:23 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:24:46.422 10:32:23 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:46.422 10:32:23 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:46.422 10:32:23 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:46.422 10:32:23 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:46.422 10:32:23 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:46.422 10:32:23 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:46.422 10:32:23 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:46.423 10:32:23 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:46.423 10:32:23 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:46.423 10:32:23 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:46.423 10:32:23 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:24:46.423 10:32:23 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/bpftrace.sh 00:24:46.423 10:32:23 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:46.423 10:32:23 
nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:24:46.423 10:32:23 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:24:46.423 10:32:23 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:24:46.423 10:32:23 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:46.423 10:32:23 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:46.423 10:32:23 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:46.423 10:32:23 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:46.423 10:32:23 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:46.423 10:32:23 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:46.423 10:32:23 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:46.423 10:32:23 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:46.423 10:32:23 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:46.423 10:32:23 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:24:46.423 10:32:23 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:24:54.581 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:24:54.581 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:24:54.581 
10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:24:54.581 Found net devices under 0000:98:00.0: mlx_0_0 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:24:54.581 Found net devices under 0000:98:00.1: mlx_0_1 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # rdma_device_init 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # uname 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # modprobe ib_cm 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@63 -- # modprobe ib_core 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@64 -- # modprobe ib_umad 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@66 -- # modprobe iw_cm 00:24:54.581 10:32:31 
nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # allocate_nic_ips 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@73 -- # get_rdma_if_list 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_0 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_1 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:24:54.581 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:54.581 link/ether ec:0d:9a:8b:2e:0c brd 
ff:ff:ff:ff:ff:ff 00:24:54.581 altname enp152s0f0np0 00:24:54.581 altname ens817f0np0 00:24:54.581 inet 192.168.100.8/24 scope global mlx_0_0 00:24:54.581 valid_lft forever preferred_lft forever 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:24:54.581 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:24:54.582 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:24:54.582 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:24:54.582 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:54.582 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:54.582 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:24:54.582 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:24:54.582 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:24:54.582 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:54.582 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:24:54.582 altname enp152s0f1np1 00:24:54.582 altname ens817f1np1 00:24:54.582 inet 192.168.100.9/24 scope global mlx_0_1 00:24:54.582 valid_lft forever preferred_lft forever 00:24:54.582 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:24:54.582 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:54.582 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:54.582 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:24:54.582 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:24:54.582 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@86 -- # get_rdma_if_list 00:24:54.582 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:54.582 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:24:54.582 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:24:54.582 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:54.582 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:24:54.582 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:54.582 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:54.582 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:54.582 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_0 00:24:54.582 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:24:54.582 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:54.582 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:54.582 10:32:31 
nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:54.582 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:54.582 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:54.582 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_1 00:24:54.582 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:24:54.582 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:24:54.582 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:24:54.582 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:24:54.582 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:24:54.582 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:54.582 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:54.582 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:24:54.582 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:24:54.582 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:24:54.582 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:24:54.582 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:54.582 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:54.582 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:24:54.582 192.168.100.9' 00:24:54.582 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:24:54.582 192.168.100.9' 00:24:54.582 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # head -n 1 00:24:54.582 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:54.582 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:24:54.582 192.168.100.9' 00:24:54.582 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # tail -n +2 00:24:54.582 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # head -n 1 00:24:54.582 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:54.582 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:24:54.582 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:54.582 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:24:54.582 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:24:54.582 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:24:54.582 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:24:54.582 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # 
timing_enter start_nvmf_tgt 00:24:54.582 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:54.582 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:54.582 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=3048099 00:24:54.582 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 3048099 00:24:54.582 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:54.582 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 3048099 ']' 00:24:54.582 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:54.582 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:54.582 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:54.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:54.582 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:54.582 10:32:31 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:54.582 [2024-07-15 10:32:31.640326] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:24:54.582 [2024-07-15 10:32:31.640393] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:54.582 EAL: No free 2048 kB hugepages reported on node 1 00:24:54.582 [2024-07-15 10:32:31.715419] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:54.843 [2024-07-15 10:32:31.790221] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:54.843 [2024-07-15 10:32:31.790271] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:54.843 [2024-07-15 10:32:31.790279] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:54.843 [2024-07-15 10:32:31.790286] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:54.843 [2024-07-15 10:32:31.790291] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
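At this point the trace has started the target application (nvmf_tgt -i 0 -e 0xFFFF -m 0x3) and is waiting on its RPC socket; the entries that follow configure an ANA-reporting subsystem with two RDMA listeners, which become the two paths whose states the test flips further down. A condensed, illustrative sketch of that configuration, using only rpc.py calls that appear in the surrounding trace:

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py   # $rpc is shorthand added here for readability
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421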
00:24:54.843 [2024-07-15 10:32:31.790361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:54.843 [2024-07-15 10:32:31.790377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:55.413 10:32:32 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:55.413 10:32:32 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:24:55.413 10:32:32 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:55.413 10:32:32 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:55.413 10:32:32 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:55.413 10:32:32 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:55.413 10:32:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3048099 00:24:55.413 10:32:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:24:55.673 [2024-07-15 10:32:32.614944] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x20edb70/0x20f2060) succeed. 00:24:55.673 [2024-07-15 10:32:32.627170] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x20ef070/0x21336f0) succeed. 00:24:55.673 10:32:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:55.932 Malloc0 00:24:55.932 10:32:32 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:24:55.932 10:32:33 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:56.190 10:32:33 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:56.190 [2024-07-15 10:32:33.315511] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:56.190 10:32:33 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:24:56.449 [2024-07-15 10:32:33.467647] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:24:56.449 10:32:33 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3048512 00:24:56.449 10:32:33 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:56.449 10:32:33 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:24:56.449 10:32:33 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # 
waitforlisten 3048512 /var/tmp/bdevperf.sock 00:24:56.449 10:32:33 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 3048512 ']' 00:24:56.449 10:32:33 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:56.449 10:32:33 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:56.449 10:32:33 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:56.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:56.449 10:32:33 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:56.449 10:32:33 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:57.387 10:32:34 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:57.387 10:32:34 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:24:57.388 10:32:34 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:57.388 10:32:34 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:24:57.648 Nvme0n1 00:24:57.648 10:32:34 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:57.908 Nvme0n1 00:24:57.908 10:32:34 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:24:57.908 10:32:34 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:24:59.819 10:32:36 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:24:59.819 10:32:36 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:25:00.079 10:32:37 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:25:00.338 10:32:37 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:25:01.277 10:32:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:25:01.277 10:32:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:01.277 10:32:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:25:01.277 10:32:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:01.537 10:32:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:01.537 10:32:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:01.537 10:32:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:01.537 10:32:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:01.537 10:32:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:01.537 10:32:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:01.537 10:32:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:01.537 10:32:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:01.797 10:32:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:01.797 10:32:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:01.797 10:32:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:01.797 10:32:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:01.797 10:32:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:01.797 10:32:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:01.797 10:32:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:01.797 10:32:38 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:02.058 10:32:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:02.058 10:32:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:02.058 10:32:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:02.058 10:32:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:02.318 10:32:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:02.318 10:32:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # 
set_ANA_state non_optimized optimized 00:25:02.318 10:32:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:25:02.318 10:32:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:25:02.605 10:32:39 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:25:03.544 10:32:40 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:25:03.544 10:32:40 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:03.544 10:32:40 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:03.544 10:32:40 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:03.805 10:32:40 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:03.805 10:32:40 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:03.805 10:32:40 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:03.805 10:32:40 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:03.805 10:32:40 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:03.805 10:32:40 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:03.805 10:32:40 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:03.805 10:32:40 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:04.064 10:32:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:04.064 10:32:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:04.064 10:32:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:04.064 10:32:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:04.322 10:32:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:04.322 10:32:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:04.322 10:32:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:04.322 10:32:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:04.322 10:32:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:04.322 10:32:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:04.323 10:32:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:04.323 10:32:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:04.582 10:32:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:04.582 10:32:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:25:04.582 10:32:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:25:04.840 10:32:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:25:04.840 10:32:41 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:25:06.220 10:32:42 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:25:06.220 10:32:42 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:06.220 10:32:42 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:06.220 10:32:42 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:06.220 10:32:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:06.220 10:32:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:06.220 10:32:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:06.220 10:32:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:06.220 10:32:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:06.220 10:32:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:06.220 10:32:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:06.220 
10:32:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:06.479 10:32:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:06.479 10:32:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:06.479 10:32:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:06.479 10:32:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:06.740 10:32:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:06.740 10:32:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:06.740 10:32:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:06.740 10:32:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:06.740 10:32:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:06.740 10:32:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:06.740 10:32:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:06.740 10:32:43 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:07.000 10:32:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:07.000 10:32:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:25:07.000 10:32:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:25:07.000 10:32:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:25:07.260 10:32:44 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:25:08.202 10:32:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:25:08.202 10:32:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:08.202 10:32:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:08.202 10:32:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4420").current' 00:25:08.463 10:32:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:08.463 10:32:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:08.463 10:32:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:08.463 10:32:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:08.722 10:32:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:08.722 10:32:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:08.722 10:32:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:08.722 10:32:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:08.722 10:32:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:08.722 10:32:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:08.722 10:32:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:08.722 10:32:45 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:08.982 10:32:46 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:08.982 10:32:46 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:08.982 10:32:46 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:08.982 10:32:46 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:09.261 10:32:46 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:09.261 10:32:46 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:09.261 10:32:46 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:09.261 10:32:46 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:09.261 10:32:46 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:09.261 10:32:46 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:25:09.261 10:32:46 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:25:09.521 10:32:46 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:25:09.782 10:32:46 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:25:10.723 10:32:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:25:10.723 10:32:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:10.723 10:32:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:10.723 10:32:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:10.723 10:32:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:10.723 10:32:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:10.723 10:32:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:10.723 10:32:47 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:10.984 10:32:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:10.984 10:32:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:10.984 10:32:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:10.984 10:32:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:11.244 10:32:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:11.244 10:32:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:11.244 10:32:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:11.244 10:32:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:11.244 10:32:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:11.244 10:32:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:11.244 10:32:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:11.244 
10:32:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:11.505 10:32:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:11.505 10:32:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:11.505 10:32:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:11.505 10:32:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:11.765 10:32:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:11.765 10:32:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:25:11.765 10:32:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:25:11.765 10:32:48 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:25:12.026 10:32:49 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:25:12.966 10:32:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:25:12.966 10:32:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:12.966 10:32:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:12.966 10:32:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:13.226 10:32:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:13.226 10:32:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:13.226 10:32:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:13.226 10:32:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:13.486 10:32:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:13.486 10:32:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:13.486 10:32:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:13.486 10:32:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").connected' 00:25:13.486 10:32:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:13.486 10:32:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:13.486 10:32:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:13.486 10:32:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:13.747 10:32:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:13.747 10:32:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:13.747 10:32:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:13.747 10:32:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:13.747 10:32:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:13.747 10:32:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:13.747 10:32:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:13.747 10:32:50 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:14.007 10:32:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:14.007 10:32:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:25:14.266 10:32:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:25:14.266 10:32:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:25:14.266 10:32:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:25:14.526 10:32:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:25:15.543 10:32:52 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:25:15.543 10:32:52 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:15.543 10:32:52 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:15.543 10:32:52 
nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:15.543 10:32:52 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:15.543 10:32:52 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:15.543 10:32:52 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:15.543 10:32:52 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:15.804 10:32:52 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:15.804 10:32:52 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:15.804 10:32:52 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:15.804 10:32:52 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:16.065 10:32:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:16.065 10:32:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:16.065 10:32:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:16.065 10:32:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:16.065 10:32:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:16.065 10:32:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:16.065 10:32:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:16.065 10:32:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:16.325 10:32:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:16.325 10:32:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:16.325 10:32:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:16.325 10:32:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:16.584 10:32:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:16.584 10:32:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:25:16.584 
10:32:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:25:16.584 10:32:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:25:16.844 10:32:53 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:25:17.784 10:32:54 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:25:17.784 10:32:54 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:17.784 10:32:54 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:17.784 10:32:54 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:18.044 10:32:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:18.044 10:32:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:18.044 10:32:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:18.044 10:32:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:18.305 10:32:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:18.305 10:32:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:18.305 10:32:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:18.305 10:32:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:18.305 10:32:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:18.305 10:32:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:18.305 10:32:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:18.305 10:32:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:18.566 10:32:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:18.566 10:32:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:18.566 10:32:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:18.566 10:32:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:18.566 10:32:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:18.566 10:32:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:18.566 10:32:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:18.566 10:32:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:18.828 10:32:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:18.828 10:32:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:25:18.828 10:32:55 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:25:19.089 10:32:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:25:19.089 10:32:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:25:20.475 10:32:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:25:20.475 10:32:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:20.475 10:32:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:20.475 10:32:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:20.475 10:32:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:20.475 10:32:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:20.475 10:32:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:20.475 10:32:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:20.475 10:32:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:20.475 10:32:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:20.475 10:32:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:20.475 10:32:57 
nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:20.736 10:32:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:20.736 10:32:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:20.736 10:32:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:20.736 10:32:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:20.736 10:32:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:20.736 10:32:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:20.997 10:32:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:20.997 10:32:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:20.997 10:32:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:20.997 10:32:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:20.997 10:32:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:20.997 10:32:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:21.258 10:32:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:21.258 10:32:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:25:21.258 10:32:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:25:21.519 10:32:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:25:21.519 10:32:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:25:22.461 10:32:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:25:22.461 10:32:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:22.461 10:32:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:22.461 10:32:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:22.722 10:32:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:22.722 10:32:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:22.722 10:32:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:22.722 10:32:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:22.983 10:32:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:22.983 10:32:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:22.983 10:32:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:22.983 10:32:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:22.983 10:33:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:22.983 10:33:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:22.983 10:33:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:22.983 10:33:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:23.245 10:33:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:23.245 10:33:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:23.245 10:33:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:23.245 10:33:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:23.507 10:33:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:23.507 10:33:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:23.507 10:33:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:23.507 10:33:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:23.507 10:33:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:23.507 10:33:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3048512 00:25:23.507 10:33:00 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 
3048512 ']' 00:25:23.507 10:33:00 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 3048512 00:25:23.507 10:33:00 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:25:23.507 10:33:00 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:23.507 10:33:00 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3048512 00:25:23.507 10:33:00 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:25:23.507 10:33:00 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:25:23.507 10:33:00 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3048512' 00:25:23.507 killing process with pid 3048512 00:25:23.507 10:33:00 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 3048512 00:25:23.507 10:33:00 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 3048512 00:25:23.774 Connection closed with partial response: 00:25:23.774 00:25:23.774 00:25:23.774 10:33:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3048512 00:25:23.774 10:33:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:23.774 [2024-07-15 10:32:33.536961] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:25:23.774 [2024-07-15 10:32:33.537019] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3048512 ] 00:25:23.774 EAL: No free 2048 kB hugepages reported on node 1 00:25:23.774 [2024-07-15 10:32:33.593081] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:23.774 [2024-07-15 10:32:33.645270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:23.774 Running I/O for 90 seconds... 
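The trace above exercises three small shell helpers over and over: set_ANA_state flips the ANA state advertised by the two listeners (ports 4420 and 4421), port_status reads one attribute of an I/O path from bdev_nvme_get_io_paths through jq, and check_status asserts all six attributes at once. The following is a minimal bash sketch of what those helpers appear to do, reconstructed only from the xtrace output; the variable names used here (rpc_sock, target_ip) are assumptions, and the real multipath_status.sh may differ in detail.

#!/usr/bin/env bash
# Sketch reconstructed from the xtrace above; not the actual test script.
rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk   # path as shown in the log
rpc_sock=/var/tmp/bdevperf.sock                         # bdevperf RPC socket (from the log)
nqn=nqn.2016-06.io.spdk:cnode1                          # subsystem NQN (from the log)
target_ip=192.168.100.8                                 # listener address (from the log)

# Read one attribute (current/connected/accessible) of the io_path that uses
# the given listener port and compare it with the expected boolean string.
port_status() {
    local port=$1 attr=$2 expected=$3 actual
    actual=$("$rootdir/scripts/rpc.py" -s "$rpc_sock" bdev_nvme_get_io_paths |
        jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
    [[ $actual == "$expected" ]]
}

# Assert all six attributes for the two paths in one call, e.g.
#   check_status true false true true true false
check_status() {
    port_status 4420 current    "$1" &&
    port_status 4421 current    "$2" &&
    port_status 4420 connected  "$3" &&
    port_status 4421 connected  "$4" &&
    port_status 4420 accessible "$5" &&
    port_status 4421 accessible "$6"
}

# Change the ANA state advertised by each listener on the target side
# (the target RPC uses the default socket, so no -s option here, as in the trace).
set_ANA_state() {
    "$rootdir/scripts/rpc.py" nvmf_subsystem_listener_set_ana_state "$nqn" \
        -t rdma -a "$target_ip" -s 4420 -n "$1"
    "$rootdir/scripts/rpc.py" nvmf_subsystem_listener_set_ana_state "$nqn" \
        -t rdma -a "$target_ip" -s 4421 -n "$2"
}

Each test step in the trace is then just set_ANA_state with the two desired states, a one-second sleep, and a check_status call with the expected booleans.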
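What follows is the cat of try.txt, the bdevperf log. The portion shown largely consists of near-identical nvme_qpair notices for WRITE commands that completed with ANA status ASYMMETRIC ACCESS INACCESSIBLE (03/02) while the listener states were being switched. When reading such a dump offline, a quick summary is often more useful than the raw stream; for example (the path is taken from the cat command above, and the one-liner itself is only an illustration, not part of the test):

grep -o 'ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:[0-9]*' \
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt |
    awk '{print $NF}' | sort | uniq -c   # ANA-inaccessible completions, counted per queue id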
00:25:23.775 [2024-07-15 10:32:46.536285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:27024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.775 [2024-07-15 10:32:46.536318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:23.775 [2024-07-15 10:32:46.536349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:27032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.775 [2024-07-15 10:32:46.536356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:23.775 [2024-07-15 10:32:46.536363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:27040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.775 [2024-07-15 10:32:46.536369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:23.775 [2024-07-15 10:32:46.536376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:27048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.775 [2024-07-15 10:32:46.536381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:23.775 [2024-07-15 10:32:46.536388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:27056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.775 [2024-07-15 10:32:46.536393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:23.775 [2024-07-15 10:32:46.536400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:27064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.775 [2024-07-15 10:32:46.536405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:23.775 [2024-07-15 10:32:46.536412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:27072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.775 [2024-07-15 10:32:46.536417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:23.775 [2024-07-15 10:32:46.536424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:27080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.775 [2024-07-15 10:32:46.536429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:23.775 [2024-07-15 10:32:46.536436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:27088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.775 [2024-07-15 10:32:46.536441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:23.775 [2024-07-15 10:32:46.536448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:27096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.775 [2024-07-15 10:32:46.536453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:87 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:23.775 [2024-07-15 10:32:46.536460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:27104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.775 [2024-07-15 10:32:46.536470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:23.775 [2024-07-15 10:32:46.536477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:27112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.775 [2024-07-15 10:32:46.536482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:23.775 [2024-07-15 10:32:46.536489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:27120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.775 [2024-07-15 10:32:46.536494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:23.775 [2024-07-15 10:32:46.536501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.775 [2024-07-15 10:32:46.536506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:23.775 [2024-07-15 10:32:46.536513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.775 [2024-07-15 10:32:46.536518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:23.775 [2024-07-15 10:32:46.536525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:27144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.775 [2024-07-15 10:32:46.536530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:23.775 [2024-07-15 10:32:46.536537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:27152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.775 [2024-07-15 10:32:46.536542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:23.775 [2024-07-15 10:32:46.536549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.775 [2024-07-15 10:32:46.536553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:23.775 [2024-07-15 10:32:46.536560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:27168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.775 [2024-07-15 10:32:46.536566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:23.775 [2024-07-15 10:32:46.536573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:27176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.775 [2024-07-15 10:32:46.536579] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:23.775 [2024-07-15 10:32:46.536586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:27184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.775 [2024-07-15 10:32:46.536590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:23.775 [2024-07-15 10:32:46.536597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:27192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.775 [2024-07-15 10:32:46.536602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:23.775 [2024-07-15 10:32:46.536609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:27200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.775 [2024-07-15 10:32:46.536614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:23.775 [2024-07-15 10:32:46.536623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:27208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.775 [2024-07-15 10:32:46.536628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:23.775 [2024-07-15 10:32:46.536635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:27216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.775 [2024-07-15 10:32:46.536640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:23.775 [2024-07-15 10:32:46.536647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:27224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.775 [2024-07-15 10:32:46.536652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:23.775 [2024-07-15 10:32:46.536660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:27232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.775 [2024-07-15 10:32:46.536664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:23.775 [2024-07-15 10:32:46.536672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:27240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.775 [2024-07-15 10:32:46.536677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:23.775 [2024-07-15 10:32:46.536684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:27248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.775 [2024-07-15 10:32:46.536689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:23.775 [2024-07-15 10:32:46.536697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:27256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:23.775 [2024-07-15 10:32:46.536702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:23.775 [2024-07-15 10:32:46.536709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:27264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.775 [2024-07-15 10:32:46.536714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:23.775 [2024-07-15 10:32:46.536721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:27272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.775 [2024-07-15 10:32:46.536726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:23.775 [2024-07-15 10:32:46.536925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:27280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.775 [2024-07-15 10:32:46.536931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:23.775 [2024-07-15 10:32:46.536941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:27288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.775 [2024-07-15 10:32:46.536946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:23.775 [2024-07-15 10:32:46.536955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:27296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.775 [2024-07-15 10:32:46.536960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:23.776 [2024-07-15 10:32:46.536970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:27304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.776 [2024-07-15 10:32:46.536975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:23.776 [2024-07-15 10:32:46.536984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:27312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.776 [2024-07-15 10:32:46.536989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:23.776 [2024-07-15 10:32:46.536998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:27320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.776 [2024-07-15 10:32:46.537003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:23.776 [2024-07-15 10:32:46.537012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:27328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.776 [2024-07-15 10:32:46.537017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:23.776 [2024-07-15 10:32:46.537026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:27336 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.776 [2024-07-15 10:32:46.537030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:23.776 [2024-07-15 10:32:46.537039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:27344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.776 [2024-07-15 10:32:46.537044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:23.776 [2024-07-15 10:32:46.537053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:27352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.776 [2024-07-15 10:32:46.537058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:23.776 [2024-07-15 10:32:46.537066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:27360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.776 [2024-07-15 10:32:46.537071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:23.776 [2024-07-15 10:32:46.537080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:27368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.776 [2024-07-15 10:32:46.537085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:23.776 [2024-07-15 10:32:46.537094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:27376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.776 [2024-07-15 10:32:46.537099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:23.776 [2024-07-15 10:32:46.537108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:27384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.776 [2024-07-15 10:32:46.537112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:23.776 [2024-07-15 10:32:46.537121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:27392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.776 [2024-07-15 10:32:46.537126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:23.776 [2024-07-15 10:32:46.537135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:27400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.776 [2024-07-15 10:32:46.537141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:23.776 [2024-07-15 10:32:46.537150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:27408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.776 [2024-07-15 10:32:46.537155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:23.776 [2024-07-15 10:32:46.537163] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:27416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.776 [2024-07-15 10:32:46.537168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:23.776 [2024-07-15 10:32:46.537177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:27424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.776 [2024-07-15 10:32:46.537182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:23.776 [2024-07-15 10:32:46.537191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:27432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.776 [2024-07-15 10:32:46.537196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:23.776 [2024-07-15 10:32:46.537204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:27440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.776 [2024-07-15 10:32:46.537209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:23.776 [2024-07-15 10:32:46.537218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:27448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.776 [2024-07-15 10:32:46.537223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:23.776 [2024-07-15 10:32:46.537234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:27456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.776 [2024-07-15 10:32:46.537239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:23.776 [2024-07-15 10:32:46.537249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.776 [2024-07-15 10:32:46.537254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:23.776 [2024-07-15 10:32:46.537263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:27472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.776 [2024-07-15 10:32:46.537267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:23.776 [2024-07-15 10:32:46.537276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:27480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.776 [2024-07-15 10:32:46.537281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:23.776 [2024-07-15 10:32:46.537291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:27488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.776 [2024-07-15 10:32:46.537295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:23.776 [2024-07-15 
10:32:46.537304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:27496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.776 [2024-07-15 10:32:46.537310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:23.776 [2024-07-15 10:32:46.537319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:27504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.776 [2024-07-15 10:32:46.537324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:23.776 [2024-07-15 10:32:46.537333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:27512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.776 [2024-07-15 10:32:46.537338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:23.776 [2024-07-15 10:32:46.537346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:27520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.776 [2024-07-15 10:32:46.537352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:23.776 [2024-07-15 10:32:46.537361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:27528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.776 [2024-07-15 10:32:46.537366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.776 [2024-07-15 10:32:46.537374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:27536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.776 [2024-07-15 10:32:46.537379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:23.776 [2024-07-15 10:32:46.537389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:27544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.776 [2024-07-15 10:32:46.537393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:23.776 [2024-07-15 10:32:46.537632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:27552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.776 [2024-07-15 10:32:46.537638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:23.776 [2024-07-15 10:32:46.537649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:27560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.776 [2024-07-15 10:32:46.537654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:23.776 [2024-07-15 10:32:46.537666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:27568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.776 [2024-07-15 10:32:46.537671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 
cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:23.776 [2024-07-15 10:32:46.537682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:27576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.776 [2024-07-15 10:32:46.537687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:23.776 [2024-07-15 10:32:46.537698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:27584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.776 [2024-07-15 10:32:46.537702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:23.776 [2024-07-15 10:32:46.537713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:27592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.777 [2024-07-15 10:32:46.537718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:23.777 [2024-07-15 10:32:46.537731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.777 [2024-07-15 10:32:46.537736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:23.777 [2024-07-15 10:32:46.537747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:27608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.777 [2024-07-15 10:32:46.537751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:23.777 [2024-07-15 10:32:46.537762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:27616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.777 [2024-07-15 10:32:46.537767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:23.777 [2024-07-15 10:32:46.537779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:27624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.777 [2024-07-15 10:32:46.537783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:23.777 [2024-07-15 10:32:46.537795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:26624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x184000 00:25:23.777 [2024-07-15 10:32:46.537800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:23.777 [2024-07-15 10:32:46.537811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:26632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x184000 00:25:23.777 [2024-07-15 10:32:46.537816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:23.777 [2024-07-15 10:32:46.537827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:26640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007504000 len:0x1000 key:0x184000 
00:25:23.777 [2024-07-15 10:32:46.537832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:23.777 [2024-07-15 10:32:46.537843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:26648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0x184000 00:25:23.777 [2024-07-15 10:32:46.537848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:23.777 [2024-07-15 10:32:46.537859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:26656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007508000 len:0x1000 key:0x184000 00:25:23.777 [2024-07-15 10:32:46.537865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:23.777 [2024-07-15 10:32:46.537876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:27632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.777 [2024-07-15 10:32:46.537880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:23.777 [2024-07-15 10:32:46.537891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:27640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.777 [2024-07-15 10:32:46.537896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:23.777 [2024-07-15 10:32:46.537907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:26664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750a000 len:0x1000 key:0x184000 00:25:23.777 [2024-07-15 10:32:46.537913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:23.777 [2024-07-15 10:32:46.537925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:26672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x184000 00:25:23.777 [2024-07-15 10:32:46.537930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:23.777 [2024-07-15 10:32:46.537941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:26680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0x184000 00:25:23.777 [2024-07-15 10:32:46.537946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:23.777 [2024-07-15 10:32:46.537957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:26688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x184000 00:25:23.777 [2024-07-15 10:32:46.537962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:23.777 [2024-07-15 10:32:46.537973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0x184000 00:25:23.777 [2024-07-15 10:32:46.537979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:23.777 [2024-07-15 10:32:46.537990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:26704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007514000 len:0x1000 key:0x184000 00:25:23.777 [2024-07-15 10:32:46.537995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:23.777 [2024-07-15 10:32:46.538006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:26712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x184000 00:25:23.777 [2024-07-15 10:32:46.538011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:23.777 [2024-07-15 10:32:46.538023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:26720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007518000 len:0x1000 key:0x184000 00:25:23.777 [2024-07-15 10:32:46.538028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:23.777 [2024-07-15 10:32:46.538039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:26728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751a000 len:0x1000 key:0x184000 00:25:23.777 [2024-07-15 10:32:46.538044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:23.777 [2024-07-15 10:32:46.538055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:26736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0x184000 00:25:23.777 [2024-07-15 10:32:46.538060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:23.777 [2024-07-15 10:32:46.538071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:26744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751e000 len:0x1000 key:0x184000 00:25:23.777 [2024-07-15 10:32:46.538076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:23.777 [2024-07-15 10:32:46.538087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:26752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007520000 len:0x1000 key:0x184000 00:25:23.777 [2024-07-15 10:32:46.538092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:23.777 [2024-07-15 10:32:46.538107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:26760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0x184000 00:25:23.777 [2024-07-15 10:32:46.538112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.777 [2024-07-15 10:32:46.538123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x184000 00:25:23.777 [2024-07-15 10:32:46.538128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:17 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:23.777 [2024-07-15 10:32:46.538312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:26776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0x184000 00:25:23.777 [2024-07-15 10:32:46.538318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:23.777 [2024-07-15 10:32:46.538331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:26784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007528000 len:0x1000 key:0x184000 00:25:23.777 [2024-07-15 10:32:46.538337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:23.777 [2024-07-15 10:32:46.538350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:26792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0x184000 00:25:23.777 [2024-07-15 10:32:46.538354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:23.777 [2024-07-15 10:32:46.538367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:26800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752c000 len:0x1000 key:0x184000 00:25:23.777 [2024-07-15 10:32:46.538372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:23.777 [2024-07-15 10:32:46.538385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:26808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752e000 len:0x1000 key:0x184000 00:25:23.777 [2024-07-15 10:32:46.538390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:23.777 [2024-07-15 10:32:46.538403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:26816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0x184000 00:25:23.777 [2024-07-15 10:32:46.538408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:23.777 [2024-07-15 10:32:46.538421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:26824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x184000 00:25:23.777 [2024-07-15 10:32:46.538426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:23.777 [2024-07-15 10:32:46.538439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:26832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007534000 len:0x1000 key:0x184000 00:25:23.777 [2024-07-15 10:32:46.538444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:23.777 [2024-07-15 10:32:46.538457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:26840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x184000 00:25:23.777 [2024-07-15 10:32:46.538461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002b p:0 m:0 dnr:0 
00:25:23.777 [2024-07-15 10:32:46.538476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:26848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007538000 len:0x1000 key:0x184000 00:25:23.777 [2024-07-15 10:32:46.538481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:23.778 [2024-07-15 10:32:46.538494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:26856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0x184000 00:25:23.778 [2024-07-15 10:32:46.538499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:23.778 [2024-07-15 10:32:46.538512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:26864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753c000 len:0x1000 key:0x184000 00:25:23.778 [2024-07-15 10:32:46.538517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:23.778 [2024-07-15 10:32:46.538530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:26872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x184000 00:25:23.778 [2024-07-15 10:32:46.538535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:23.778 [2024-07-15 10:32:46.538548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:26880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007540000 len:0x1000 key:0x184000 00:25:23.778 [2024-07-15 10:32:46.538553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:23.778 [2024-07-15 10:32:46.538567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:26888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x184000 00:25:23.778 [2024-07-15 10:32:46.538571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:23.778 [2024-07-15 10:32:46.538584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:26896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007544000 len:0x1000 key:0x184000 00:25:23.778 [2024-07-15 10:32:46.538590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:23.778 [2024-07-15 10:32:46.538603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:26904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x184000 00:25:23.778 [2024-07-15 10:32:46.538607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:23.778 [2024-07-15 10:32:46.538620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:26912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x184000 00:25:23.778 [2024-07-15 10:32:46.538625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:23.778 [2024-07-15 
10:32:46.538638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:26920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754a000 len:0x1000 key:0x184000 00:25:23.778 [2024-07-15 10:32:46.538643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:23.778 [2024-07-15 10:32:46.538657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:26928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x184000 00:25:23.778 [2024-07-15 10:32:46.538661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:23.778 [2024-07-15 10:32:46.538674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:26936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x184000 00:25:23.778 [2024-07-15 10:32:46.538681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:23.778 [2024-07-15 10:32:46.538694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:26944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x184000 00:25:23.778 [2024-07-15 10:32:46.538699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:23.778 [2024-07-15 10:32:46.538712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:26952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x184000 00:25:23.778 [2024-07-15 10:32:46.538717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:23.778 [2024-07-15 10:32:46.538731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:26960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x184000 00:25:23.778 [2024-07-15 10:32:46.538735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:23.778 [2024-07-15 10:32:46.538749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:26968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x184000 00:25:23.778 [2024-07-15 10:32:46.538754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:23.778 [2024-07-15 10:32:46.538767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:26976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x184000 00:25:23.778 [2024-07-15 10:32:46.538771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:23.778 [2024-07-15 10:32:46.538785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:26984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x184000 00:25:23.778 [2024-07-15 10:32:46.538790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:23.778 [2024-07-15 10:32:46.538803] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:26992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x184000 00:25:23.778 [2024-07-15 10:32:46.538808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:23.778 [2024-07-15 10:32:46.538821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:27000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x184000 00:25:23.778 [2024-07-15 10:32:46.538825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:23.778 [2024-07-15 10:32:46.538838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:27008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x184000 00:25:23.778 [2024-07-15 10:32:46.538844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:23.778 [2024-07-15 10:32:46.538857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:27016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x184000 00:25:23.778 [2024-07-15 10:32:46.538862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:23.778 [2024-07-15 10:32:58.599917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:48720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.778 [2024-07-15 10:32:58.599957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.778 [2024-07-15 10:32:58.600219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:48736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.778 [2024-07-15 10:32:58.600227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:23.778 [2024-07-15 10:32:58.600240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:48248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751e000 len:0x1000 key:0x184000 00:25:23.778 [2024-07-15 10:32:58.600246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:23.778 [2024-07-15 10:32:58.600253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:48272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754a000 len:0x1000 key:0x184000 00:25:23.778 [2024-07-15 10:32:58.600258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:23.778 [2024-07-15 10:32:58.600266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:48288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007534000 len:0x1000 key:0x184000 00:25:23.778 [2024-07-15 10:32:58.600271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:23.778 [2024-07-15 10:32:58.600278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:48744 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:25:23.778 [2024-07-15 10:32:58.600283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:23.778 [2024-07-15 10:32:58.600290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:48760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.778 [2024-07-15 10:32:58.600295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:23.778 [2024-07-15 10:32:58.600303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:48776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.778 [2024-07-15 10:32:58.600308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:23.778 [2024-07-15 10:32:58.600315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:48352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007544000 len:0x1000 key:0x184000 00:25:23.778 [2024-07-15 10:32:58.600321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:23.778 [2024-07-15 10:32:58.600328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:48792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.778 [2024-07-15 10:32:58.600333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:23.778 [2024-07-15 10:32:58.600340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:48808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.778 [2024-07-15 10:32:58.600345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:23.778 [2024-07-15 10:32:58.600353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:48816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.778 [2024-07-15 10:32:58.600358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:23.778 [2024-07-15 10:32:58.600365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:48824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.778 [2024-07-15 10:32:58.600372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:23.778 [2024-07-15 10:32:58.600380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:48424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a6000 len:0x1000 key:0x184000 00:25:23.778 [2024-07-15 10:32:58.600385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:23.778 [2024-07-15 10:32:58.600393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:48848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.778 [2024-07-15 10:32:58.600398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:23.778 [2024-07-15 10:32:58.600405] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:48864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.778 [2024-07-15 10:32:58.600410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:23.778 [2024-07-15 10:32:58.600418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:48472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751a000 len:0x1000 key:0x184000 00:25:23.778 [2024-07-15 10:32:58.600423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:23.778 [2024-07-15 10:32:58.600430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:48880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.778 [2024-07-15 10:32:58.600435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:23.778 [2024-07-15 10:32:58.600442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:48512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c8000 len:0x1000 key:0x184000 00:25:23.778 [2024-07-15 10:32:58.600447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:23.779 [2024-07-15 10:32:58.600454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:48904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.779 [2024-07-15 10:32:58.600459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:23.779 [2024-07-15 10:32:58.600466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:48920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.779 [2024-07-15 10:32:58.600471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:23.779 [2024-07-15 10:32:58.600478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:48928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.779 [2024-07-15 10:32:58.600483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:23.779 [2024-07-15 10:32:58.600490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:48584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e2000 len:0x1000 key:0x184000 00:25:23.779 [2024-07-15 10:32:58.600496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:23.779 [2024-07-15 10:32:58.600681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:48944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.779 [2024-07-15 10:32:58.600687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:23.779 [2024-07-15 10:32:58.600695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:48600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758e000 len:0x1000 key:0x184000 00:25:23.779 [2024-07-15 10:32:58.600702] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:23.779 [2024-07-15 10:32:58.600709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:48960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.779 [2024-07-15 10:32:58.600713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:23.779 [2024-07-15 10:32:58.600721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:48632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d8000 len:0x1000 key:0x184000 00:25:23.779 [2024-07-15 10:32:58.600726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:23.779 [2024-07-15 10:32:58.600733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:48648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x184000 00:25:23.779 [2024-07-15 10:32:58.600738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:23.779 [2024-07-15 10:32:58.600746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:48984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.779 [2024-07-15 10:32:58.600751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:23.779 [2024-07-15 10:32:58.600758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:48992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.779 [2024-07-15 10:32:58.600762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:23.779 [2024-07-15 10:32:58.600770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:48688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x184000 00:25:23.779 [2024-07-15 10:32:58.600774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:23.779 [2024-07-15 10:32:58.600782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:48192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x184000 00:25:23.779 [2024-07-15 10:32:58.600787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:23.779 [2024-07-15 10:32:58.600794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:49008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.779 [2024-07-15 10:32:58.600799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:23.779 [2024-07-15 10:32:58.600806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:49016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.779 [2024-07-15 10:32:58.600811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:23.779 [2024-07-15 10:32:58.600818] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:48224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x184000 00:25:23.779 [2024-07-15 10:32:58.600824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:23.779 [2024-07-15 10:32:58.600832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:48256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x184000 00:25:23.779 [2024-07-15 10:32:58.600836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:23.779 [2024-07-15 10:32:58.600845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:49032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.779 [2024-07-15 10:32:58.600851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:23.779 [2024-07-15 10:32:58.600858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:49048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.779 [2024-07-15 10:32:58.600863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:23.779 [2024-07-15 10:32:58.600870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:48312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x184000 00:25:23.779 [2024-07-15 10:32:58.600875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:23.779 [2024-07-15 10:32:58.600882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:48320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ae000 len:0x1000 key:0x184000 00:25:23.779 [2024-07-15 10:32:58.600887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:23.779 [2024-07-15 10:32:58.600894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:48336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075da000 len:0x1000 key:0x184000 00:25:23.779 [2024-07-15 10:32:58.600899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:23.779 [2024-07-15 10:32:58.600906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:49072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.779 [2024-07-15 10:32:58.600911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:23.779 [2024-07-15 10:32:58.600919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:48368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075cc000 len:0x1000 key:0x184000 00:25:23.779 [2024-07-15 10:32:58.600923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:23.779 [2024-07-15 10:32:58.600930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:48376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075aa000 
len:0x1000 key:0x184000 00:25:23.779 [2024-07-15 10:32:58.600935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:23.779 [2024-07-15 10:32:58.600943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:48392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a2000 len:0x1000 key:0x184000 00:25:23.779 [2024-07-15 10:32:58.600947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:23.779 [2024-07-15 10:32:58.600954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:49096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.779 [2024-07-15 10:32:58.600960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:23.779 [2024-07-15 10:32:58.600967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:48432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x184000 00:25:23.779 [2024-07-15 10:32:58.600972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:23.780 [2024-07-15 10:32:58.600980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:48448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759e000 len:0x1000 key:0x184000 00:25:23.780 [2024-07-15 10:32:58.600986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:23.780 [2024-07-15 10:32:58.600993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:48456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d4000 len:0x1000 key:0x184000 00:25:23.780 [2024-07-15 10:32:58.600998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:23.780 [2024-07-15 10:32:58.601005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:48480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x184000 00:25:23.780 [2024-07-15 10:32:58.601010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:23.780 [2024-07-15 10:32:58.601018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:48496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x184000 00:25:23.780 [2024-07-15 10:32:58.601023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:23.780 [2024-07-15 10:32:58.601030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:48520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x184000 00:25:23.780 [2024-07-15 10:32:58.601035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:23.780 [2024-07-15 10:32:58.601042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:48536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x184000 00:25:23.780 [2024-07-15 10:32:58.601047] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:23.780 [2024-07-15 10:32:58.601054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:49120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.780 [2024-07-15 10:32:58.601059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:23.780 [2024-07-15 10:32:58.601088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:48560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759c000 len:0x1000 key:0x184000 00:25:23.780 [2024-07-15 10:32:58.601093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:23.780 [2024-07-15 10:32:58.601101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:48592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a8000 len:0x1000 key:0x184000 00:25:23.780 [2024-07-15 10:32:58.601106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:23.780 [2024-07-15 10:32:58.601113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:49144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.780 [2024-07-15 10:32:58.601118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:23.780 [2024-07-15 10:32:58.601125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:49160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.780 [2024-07-15 10:32:58.601130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:23.780 [2024-07-15 10:32:58.601137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:49176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.780 [2024-07-15 10:32:58.601142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:23.780 [2024-07-15 10:32:58.601151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:48640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b2000 len:0x1000 key:0x184000 00:25:23.780 [2024-07-15 10:32:58.601156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:23.780 [2024-07-15 10:32:58.601163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:48656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b0000 len:0x1000 key:0x184000 00:25:23.780 [2024-07-15 10:32:58.601168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:23.780 [2024-07-15 10:32:58.601176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:49200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:23.780 [2024-07-15 10:32:58.601180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:23.780 [2024-07-15 10:32:58.601188] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:49208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:23.780 [2024-07-15 10:32:58.601192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:25:23.780 [2024-07-15 10:32:58.601200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:48696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752c000 len:0x1000 key:0x184000
00:25:23.780 [2024-07-15 10:32:58.601205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:25:23.780 Received shutdown signal, test time was about 25.597184 seconds
00:25:23.780
00:25:23.780 Latency(us)
00:25:23.780 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:23.780 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:25:23.780 Verification LBA range: start 0x0 length 0x4000
00:25:23.780 Nvme0n1 : 25.60 15514.15 60.60 0.00 0.00 8229.56 556.37 3019898.88
00:25:23.780 ===================================================================================================================
00:25:23.780 Total : 15514.15 60.60 0.00 0.00 8229.56 556.37 3019898.88
00:25:23.780 10:33:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:24.041 10:33:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:25:24.041 10:33:00 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
00:25:24.041 10:33:01 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:25:24.041 10:33:01 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:25:24.041 10:33:01 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:25:24.041 10:33:01 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:25:24.041 10:33:01 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:25:24.041 10:33:01 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
00:25:24.042 10:33:01 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
00:25:24.042 10:33:01 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:25:24.042 rmmod nvme_rdma
00:25:24.042 rmmod nvme_fabrics
00:25:24.042 10:33:01 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:25:24.042 10:33:01 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e
00:25:24.042 10:33:01 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0
00:25:24.042 10:33:01 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 3048099 ']'
00:25:24.042 10:33:01 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 3048099
00:25:24.042 10:33:01 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 3048099 ']'
00:25:24.042 10:33:01 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 3048099
00:25:24.042 10:33:01 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- #
uname 00:25:24.042 10:33:01 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:24.042 10:33:01 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3048099 00:25:24.042 10:33:01 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:24.042 10:33:01 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:24.042 10:33:01 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3048099' 00:25:24.042 killing process with pid 3048099 00:25:24.042 10:33:01 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 3048099 00:25:24.042 10:33:01 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 3048099 00:25:24.304 10:33:01 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:24.304 10:33:01 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:25:24.304 00:25:24.304 real 0m38.075s 00:25:24.304 user 1m43.138s 00:25:24.304 sys 0m9.320s 00:25:24.304 10:33:01 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:24.304 10:33:01 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:24.304 ************************************ 00:25:24.304 END TEST nvmf_host_multipath_status 00:25:24.304 ************************************ 00:25:24.304 10:33:01 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:25:24.304 10:33:01 nvmf_rdma -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:25:24.304 10:33:01 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:24.304 10:33:01 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:24.304 10:33:01 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:25:24.304 ************************************ 00:25:24.304 START TEST nvmf_discovery_remove_ifc 00:25:24.304 ************************************ 00:25:24.304 10:33:01 nvmf_rdma.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:25:24.304 * Looking for test storage... 
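The killprocess step traced just above follows a fixed teardown pattern: check that the nvmf target pid is still alive, confirm it is not the sudo wrapper itself, send it a signal, then wait so no zombie is left behind. A minimal sketch of that pattern, reconstructed from the xtrace (function and variable names here are illustrative, not the exact ones in autotest_common.sh):

    killprocess() {
        local pid=$1
        # nothing to do if the process is already gone
        kill -0 "$pid" 2>/dev/null || return 0
        # refuse to signal the privilege wrapper directly
        [ "$(ps --no-headers -o comm= "$pid")" != sudo ] || return 1
        echo "killing process with pid $pid"
        kill "$pid"
        # reap the child so the harness does not leave a zombie
        wait "$pid" || true
    }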
00:25:24.304 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:25:24.304 10:33:01 nvmf_rdma.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:25:24.304 10:33:01 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:25:24.304 10:33:01 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:24.304 10:33:01 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:24.304 10:33:01 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:24.304 10:33:01 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:24.304 10:33:01 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:24.304 10:33:01 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:24.304 10:33:01 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:24.304 10:33:01 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:24.304 10:33:01 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:24.304 10:33:01 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:24.304 10:33:01 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:24.304 10:33:01 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:24.304 10:33:01 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:24.304 10:33:01 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:24.304 10:33:01 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:24.304 10:33:01 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:24.304 10:33:01 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:24.304 10:33:01 nvmf_rdma.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:24.304 10:33:01 nvmf_rdma.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:24.304 10:33:01 nvmf_rdma.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:24.304 10:33:01 nvmf_rdma.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.304 10:33:01 nvmf_rdma.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.304 10:33:01 nvmf_rdma.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.304 10:33:01 nvmf_rdma.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:25:24.304 10:33:01 nvmf_rdma.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.304 10:33:01 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:25:24.304 10:33:01 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:24.304 10:33:01 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:24.304 10:33:01 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:24.304 10:33:01 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:24.304 10:33:01 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:24.304 10:33:01 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:24.304 10:33:01 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:24.304 10:33:01 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:24.304 10:33:01 nvmf_rdma.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' rdma == rdma ']' 00:25:24.304 10:33:01 nvmf_rdma.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@15 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:25:24.304 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 
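The skip above is produced by a short guard at the top of discovery_remove_ifc.sh: when the selected transport is rdma, the script prints the message and exits 0 instead of running the test. A sketch of that guard as the xtrace shows it (the TEST_TRANSPORT variable name is an assumption; the trace only shows the already-expanded value rdma):

    # discovery_remove_ifc.sh, simplified guard
    if [ "$TEST_TRANSPORT" = rdma ]; then
        echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.'
        exit 0
    fi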
00:25:24.304 10:33:01 nvmf_rdma.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@16 -- # exit 0 00:25:24.304 00:25:24.304 real 0m0.093s 00:25:24.304 user 0m0.038s 00:25:24.304 sys 0m0.060s 00:25:24.304 10:33:01 nvmf_rdma.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:24.304 10:33:01 nvmf_rdma.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:24.304 ************************************ 00:25:24.304 END TEST nvmf_discovery_remove_ifc 00:25:24.304 ************************************ 00:25:24.566 10:33:01 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:25:24.566 10:33:01 nvmf_rdma -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:25:24.566 10:33:01 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:24.566 10:33:01 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:24.566 10:33:01 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:25:24.566 ************************************ 00:25:24.566 START TEST nvmf_identify_kernel_target 00:25:24.566 ************************************ 00:25:24.566 10:33:01 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:25:24.566 * Looking for test storage... 00:25:24.566 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:25:24.566 10:33:01 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:25:24.566 10:33:01 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:25:24.566 10:33:01 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:24.566 10:33:01 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:24.566 10:33:01 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:24.566 10:33:01 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:24.566 10:33:01 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:24.566 10:33:01 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:24.566 10:33:01 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:24.566 10:33:01 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:24.566 10:33:01 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:24.566 10:33:01 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:24.566 10:33:01 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:24.566 10:33:01 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:24.566 10:33:01 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:24.566 10:33:01 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:24.566 10:33:01 
nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:24.566 10:33:01 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:24.566 10:33:01 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:24.566 10:33:01 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:24.566 10:33:01 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:24.566 10:33:01 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:24.566 10:33:01 nvmf_rdma.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.566 10:33:01 nvmf_rdma.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.566 10:33:01 nvmf_rdma.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.566 10:33:01 nvmf_rdma.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:25:24.566 10:33:01 nvmf_rdma.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.566 10:33:01 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:25:24.566 10:33:01 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:24.566 10:33:01 
nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:24.566 10:33:01 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:24.566 10:33:01 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:24.566 10:33:01 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:24.566 10:33:01 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:24.566 10:33:01 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:24.566 10:33:01 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:24.566 10:33:01 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:25:24.566 10:33:01 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:25:24.566 10:33:01 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:24.566 10:33:01 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:24.566 10:33:01 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:24.566 10:33:01 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:24.566 10:33:01 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:24.566 10:33:01 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:24.566 10:33:01 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:24.566 10:33:01 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:24.566 10:33:01 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:24.566 10:33:01 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:25:24.566 10:33:01 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:32.710 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:32.710 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:25:32.710 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:32.710 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:32.710 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:32.710 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:32.710 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:32.710 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:25:32.710 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:32.710 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:25:32.710 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:25:32.710 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:25:32.710 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 
00:25:32.710 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:25:32.710 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:25:32.710 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:32.710 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:32.710 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:32.710 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:32.710 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:32.710 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:32.710 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:32.710 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:32.710 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:32.710 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:32.710 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:32.710 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:32.710 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:25:32.710 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:25:32.710 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:25:32.710 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:25:32.710 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:25:32.710 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:32.710 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:32.710 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:25:32.710 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:25:32.710 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:25:32.710 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:25:32.710 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:32.710 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:32.710 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:25:32.710 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:25:32.710 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:32.710 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- 
nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:25:32.710 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:25:32.710 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:25:32.710 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:25:32.710 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:32.710 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:32.710 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:25:32.710 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:25:32.710 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:32.710 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:25:32.710 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:32.710 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:32.710 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:25:32.710 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:32.710 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:32.710 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:25:32.710 Found net devices under 0000:98:00.0: mlx_0_0 00:25:32.710 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:32.710 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:32.710 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:32.710 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:25:32.710 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:32.710 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:32.710 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:25:32.710 Found net devices under 0000:98:00.1: mlx_0_1 00:25:32.710 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:32.710 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # rdma_device_init 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:25:32.711 
10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # uname 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # modprobe ib_cm 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@63 -- # modprobe ib_core 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@64 -- # modprobe ib_umad 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@66 -- # modprobe iw_cm 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # allocate_nic_ips 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@73 -- # get_rdma_if_list 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 
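Before any addresses are assigned, load_ib_rdma_modules above simply modprobes the kernel InfiniBand/RDMA stack one module at a time. The same list, collected into a loop as a sketch (error handling and the uname/OS check omitted):

    # RDMA core modules loaded by the trace above
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"
    done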
00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:25:32.711 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:32.711 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:25:32.711 altname enp152s0f0np0 00:25:32.711 altname ens817f0np0 00:25:32.711 inet 192.168.100.8/24 scope global mlx_0_0 00:25:32.711 valid_lft forever preferred_lft forever 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:25:32.711 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:32.711 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:25:32.711 altname enp152s0f1np1 00:25:32.711 altname ens817f1np1 00:25:32.711 inet 192.168.100.9/24 scope global mlx_0_1 00:25:32.711 valid_lft forever preferred_lft forever 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@86 -- # get_rdma_if_list 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- 
nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:25:32.711 192.168.100.9' 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:25:32.711 192.168.100.9' 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # head -n 1 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:25:32.711 192.168.100.9' 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # tail -n +2 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # head -n 1 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # 
NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=192.168.100.8 00:25:32.711 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 192.168.100.8 00:25:32.712 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=192.168.100.8 00:25:32.712 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:25:32.712 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:32.712 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:32.712 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:32.712 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:25:32.712 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:25:32.712 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:25:32.712 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:32.712 10:33:09 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:25:36.921 Waiting for block devices as requested 00:25:36.921 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:25:36.921 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:25:36.921 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:25:36.921 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:25:36.921 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:25:36.921 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:25:37.182 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:25:37.182 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:25:37.182 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:25:37.444 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:25:37.444 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:25:37.444 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:25:37.705 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:25:37.705 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:25:37.705 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:25:37.705 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:25:37.966 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:25:37.966 10:33:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:37.966 10:33:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:37.966 10:33:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:25:37.966 10:33:14 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:25:37.966 10:33:14 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:37.966 10:33:14 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:25:37.966 10:33:14 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:25:37.966 10:33:14 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:25:37.966 10:33:14 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:37.966 No valid GPT data, bailing 00:25:37.966 10:33:14 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:37.966 10:33:15 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:25:37.966 10:33:15 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:25:37.966 10:33:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:25:37.966 10:33:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:25:37.966 10:33:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:37.966 10:33:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:37.966 10:33:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir 
/sys/kernel/config/nvmet/ports/1 00:25:37.966 10:33:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:25:37.966 10:33:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:25:37.966 10:33:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:25:37.966 10:33:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:25:37.966 10:33:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 192.168.100.8 00:25:37.966 10:33:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo rdma 00:25:37.966 10:33:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:25:37.966 10:33:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:25:37.966 10:33:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:37.966 10:33:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 192.168.100.8 -t rdma -s 4420 00:25:38.227 00:25:38.227 Discovery Log Number of Records 2, Generation counter 2 00:25:38.227 =====Discovery Log Entry 0====== 00:25:38.227 trtype: rdma 00:25:38.227 adrfam: ipv4 00:25:38.227 subtype: current discovery subsystem 00:25:38.227 treq: not specified, sq flow control disable supported 00:25:38.227 portid: 1 00:25:38.227 trsvcid: 4420 00:25:38.227 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:38.227 traddr: 192.168.100.8 00:25:38.227 eflags: none 00:25:38.227 rdma_prtype: not specified 00:25:38.227 rdma_qptype: connected 00:25:38.227 rdma_cms: rdma-cm 00:25:38.227 rdma_pkey: 0x0000 00:25:38.227 =====Discovery Log Entry 1====== 00:25:38.227 trtype: rdma 00:25:38.227 adrfam: ipv4 00:25:38.227 subtype: nvme subsystem 00:25:38.227 treq: not specified, sq flow control disable supported 00:25:38.227 portid: 1 00:25:38.227 trsvcid: 4420 00:25:38.227 subnqn: nqn.2016-06.io.spdk:testnqn 00:25:38.227 traddr: 192.168.100.8 00:25:38.227 eflags: none 00:25:38.227 rdma_prtype: not specified 00:25:38.227 rdma_qptype: connected 00:25:38.227 rdma_cms: rdma-cm 00:25:38.227 rdma_pkey: 0x0000 00:25:38.227 10:33:15 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 00:25:38.227 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:25:38.227 EAL: No free 2048 kB hugepages reported on node 1 00:25:38.227 ===================================================== 00:25:38.227 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:38.227 ===================================================== 00:25:38.227 Controller Capabilities/Features 00:25:38.227 ================================ 00:25:38.227 Vendor ID: 0000 00:25:38.227 Subsystem Vendor ID: 0000 00:25:38.227 Serial Number: 717b8a4a810aba9f9748 00:25:38.227 Model Number: Linux 00:25:38.227 Firmware Version: 6.7.0-68 00:25:38.227 Recommended Arb Burst: 0 00:25:38.227 IEEE OUI Identifier: 00 00 00 00:25:38.227 Multi-path I/O 00:25:38.227 May have multiple subsystem ports: No 00:25:38.227 May have multiple controllers: No 00:25:38.227 Associated with SR-IOV VF: No 00:25:38.227 
Max Data Transfer Size: Unlimited 00:25:38.227 Max Number of Namespaces: 0 00:25:38.227 Max Number of I/O Queues: 1024 00:25:38.227 NVMe Specification Version (VS): 1.3 00:25:38.227 NVMe Specification Version (Identify): 1.3 00:25:38.227 Maximum Queue Entries: 128 00:25:38.227 Contiguous Queues Required: No 00:25:38.227 Arbitration Mechanisms Supported 00:25:38.227 Weighted Round Robin: Not Supported 00:25:38.227 Vendor Specific: Not Supported 00:25:38.227 Reset Timeout: 7500 ms 00:25:38.227 Doorbell Stride: 4 bytes 00:25:38.227 NVM Subsystem Reset: Not Supported 00:25:38.227 Command Sets Supported 00:25:38.227 NVM Command Set: Supported 00:25:38.227 Boot Partition: Not Supported 00:25:38.227 Memory Page Size Minimum: 4096 bytes 00:25:38.227 Memory Page Size Maximum: 4096 bytes 00:25:38.227 Persistent Memory Region: Not Supported 00:25:38.227 Optional Asynchronous Events Supported 00:25:38.227 Namespace Attribute Notices: Not Supported 00:25:38.227 Firmware Activation Notices: Not Supported 00:25:38.227 ANA Change Notices: Not Supported 00:25:38.227 PLE Aggregate Log Change Notices: Not Supported 00:25:38.227 LBA Status Info Alert Notices: Not Supported 00:25:38.227 EGE Aggregate Log Change Notices: Not Supported 00:25:38.227 Normal NVM Subsystem Shutdown event: Not Supported 00:25:38.227 Zone Descriptor Change Notices: Not Supported 00:25:38.227 Discovery Log Change Notices: Supported 00:25:38.227 Controller Attributes 00:25:38.228 128-bit Host Identifier: Not Supported 00:25:38.228 Non-Operational Permissive Mode: Not Supported 00:25:38.228 NVM Sets: Not Supported 00:25:38.228 Read Recovery Levels: Not Supported 00:25:38.228 Endurance Groups: Not Supported 00:25:38.228 Predictable Latency Mode: Not Supported 00:25:38.228 Traffic Based Keep ALive: Not Supported 00:25:38.228 Namespace Granularity: Not Supported 00:25:38.228 SQ Associations: Not Supported 00:25:38.228 UUID List: Not Supported 00:25:38.228 Multi-Domain Subsystem: Not Supported 00:25:38.228 Fixed Capacity Management: Not Supported 00:25:38.228 Variable Capacity Management: Not Supported 00:25:38.228 Delete Endurance Group: Not Supported 00:25:38.228 Delete NVM Set: Not Supported 00:25:38.228 Extended LBA Formats Supported: Not Supported 00:25:38.228 Flexible Data Placement Supported: Not Supported 00:25:38.228 00:25:38.228 Controller Memory Buffer Support 00:25:38.228 ================================ 00:25:38.228 Supported: No 00:25:38.228 00:25:38.228 Persistent Memory Region Support 00:25:38.228 ================================ 00:25:38.228 Supported: No 00:25:38.228 00:25:38.228 Admin Command Set Attributes 00:25:38.228 ============================ 00:25:38.228 Security Send/Receive: Not Supported 00:25:38.228 Format NVM: Not Supported 00:25:38.228 Firmware Activate/Download: Not Supported 00:25:38.228 Namespace Management: Not Supported 00:25:38.228 Device Self-Test: Not Supported 00:25:38.228 Directives: Not Supported 00:25:38.228 NVMe-MI: Not Supported 00:25:38.228 Virtualization Management: Not Supported 00:25:38.228 Doorbell Buffer Config: Not Supported 00:25:38.228 Get LBA Status Capability: Not Supported 00:25:38.228 Command & Feature Lockdown Capability: Not Supported 00:25:38.228 Abort Command Limit: 1 00:25:38.228 Async Event Request Limit: 1 00:25:38.228 Number of Firmware Slots: N/A 00:25:38.228 Firmware Slot 1 Read-Only: N/A 00:25:38.228 Firmware Activation Without Reset: N/A 00:25:38.228 Multiple Update Detection Support: N/A 00:25:38.228 Firmware Update Granularity: No Information Provided 00:25:38.228 
Per-Namespace SMART Log: No 00:25:38.228 Asymmetric Namespace Access Log Page: Not Supported 00:25:38.228 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:38.228 Command Effects Log Page: Not Supported 00:25:38.228 Get Log Page Extended Data: Supported 00:25:38.228 Telemetry Log Pages: Not Supported 00:25:38.228 Persistent Event Log Pages: Not Supported 00:25:38.228 Supported Log Pages Log Page: May Support 00:25:38.228 Commands Supported & Effects Log Page: Not Supported 00:25:38.228 Feature Identifiers & Effects Log Page:May Support 00:25:38.228 NVMe-MI Commands & Effects Log Page: May Support 00:25:38.228 Data Area 4 for Telemetry Log: Not Supported 00:25:38.228 Error Log Page Entries Supported: 1 00:25:38.228 Keep Alive: Not Supported 00:25:38.228 00:25:38.228 NVM Command Set Attributes 00:25:38.228 ========================== 00:25:38.228 Submission Queue Entry Size 00:25:38.228 Max: 1 00:25:38.228 Min: 1 00:25:38.228 Completion Queue Entry Size 00:25:38.228 Max: 1 00:25:38.228 Min: 1 00:25:38.228 Number of Namespaces: 0 00:25:38.228 Compare Command: Not Supported 00:25:38.228 Write Uncorrectable Command: Not Supported 00:25:38.228 Dataset Management Command: Not Supported 00:25:38.228 Write Zeroes Command: Not Supported 00:25:38.228 Set Features Save Field: Not Supported 00:25:38.228 Reservations: Not Supported 00:25:38.228 Timestamp: Not Supported 00:25:38.228 Copy: Not Supported 00:25:38.228 Volatile Write Cache: Not Present 00:25:38.228 Atomic Write Unit (Normal): 1 00:25:38.228 Atomic Write Unit (PFail): 1 00:25:38.228 Atomic Compare & Write Unit: 1 00:25:38.228 Fused Compare & Write: Not Supported 00:25:38.228 Scatter-Gather List 00:25:38.228 SGL Command Set: Supported 00:25:38.228 SGL Keyed: Supported 00:25:38.228 SGL Bit Bucket Descriptor: Not Supported 00:25:38.228 SGL Metadata Pointer: Not Supported 00:25:38.228 Oversized SGL: Not Supported 00:25:38.228 SGL Metadata Address: Not Supported 00:25:38.228 SGL Offset: Supported 00:25:38.228 Transport SGL Data Block: Not Supported 00:25:38.228 Replay Protected Memory Block: Not Supported 00:25:38.228 00:25:38.228 Firmware Slot Information 00:25:38.228 ========================= 00:25:38.228 Active slot: 0 00:25:38.228 00:25:38.228 00:25:38.228 Error Log 00:25:38.228 ========= 00:25:38.228 00:25:38.228 Active Namespaces 00:25:38.228 ================= 00:25:38.228 Discovery Log Page 00:25:38.228 ================== 00:25:38.228 Generation Counter: 2 00:25:38.228 Number of Records: 2 00:25:38.228 Record Format: 0 00:25:38.228 00:25:38.228 Discovery Log Entry 0 00:25:38.228 ---------------------- 00:25:38.228 Transport Type: 1 (RDMA) 00:25:38.228 Address Family: 1 (IPv4) 00:25:38.228 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:38.228 Entry Flags: 00:25:38.228 Duplicate Returned Information: 0 00:25:38.228 Explicit Persistent Connection Support for Discovery: 0 00:25:38.228 Transport Requirements: 00:25:38.228 Secure Channel: Not Specified 00:25:38.228 Port ID: 1 (0x0001) 00:25:38.228 Controller ID: 65535 (0xffff) 00:25:38.228 Admin Max SQ Size: 32 00:25:38.228 Transport Service Identifier: 4420 00:25:38.228 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:38.228 Transport Address: 192.168.100.8 00:25:38.228 Transport Specific Address Subtype - RDMA 00:25:38.228 RDMA QP Service Type: 1 (Reliable Connected) 00:25:38.228 RDMA Provider Type: 1 (No provider specified) 00:25:38.228 RDMA CM Service: 1 (RDMA_CM) 00:25:38.228 Discovery Log Entry 1 00:25:38.228 ---------------------- 00:25:38.228 
Transport Type: 1 (RDMA) 00:25:38.228 Address Family: 1 (IPv4) 00:25:38.228 Subsystem Type: 2 (NVM Subsystem) 00:25:38.228 Entry Flags: 00:25:38.228 Duplicate Returned Information: 0 00:25:38.228 Explicit Persistent Connection Support for Discovery: 0 00:25:38.228 Transport Requirements: 00:25:38.228 Secure Channel: Not Specified 00:25:38.228 Port ID: 1 (0x0001) 00:25:38.228 Controller ID: 65535 (0xffff) 00:25:38.228 Admin Max SQ Size: 32 00:25:38.228 Transport Service Identifier: 4420 00:25:38.228 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:25:38.228 Transport Address: 192.168.100.8 00:25:38.228 Transport Specific Address Subtype - RDMA 00:25:38.228 RDMA QP Service Type: 1 (Reliable Connected) 00:25:38.228 RDMA Provider Type: 1 (No provider specified) 00:25:38.228 RDMA CM Service: 1 (RDMA_CM) 00:25:38.228 10:33:15 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:38.228 EAL: No free 2048 kB hugepages reported on node 1 00:25:38.490 get_feature(0x01) failed 00:25:38.490 get_feature(0x02) failed 00:25:38.490 get_feature(0x04) failed 00:25:38.490 ===================================================== 00:25:38.490 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:testnqn 00:25:38.490 ===================================================== 00:25:38.490 Controller Capabilities/Features 00:25:38.490 ================================ 00:25:38.490 Vendor ID: 0000 00:25:38.490 Subsystem Vendor ID: 0000 00:25:38.490 Serial Number: 052c3a98429b276e08cd 00:25:38.490 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:25:38.490 Firmware Version: 6.7.0-68 00:25:38.490 Recommended Arb Burst: 6 00:25:38.490 IEEE OUI Identifier: 00 00 00 00:25:38.490 Multi-path I/O 00:25:38.490 May have multiple subsystem ports: Yes 00:25:38.490 May have multiple controllers: Yes 00:25:38.490 Associated with SR-IOV VF: No 00:25:38.490 Max Data Transfer Size: 1048576 00:25:38.490 Max Number of Namespaces: 1024 00:25:38.490 Max Number of I/O Queues: 128 00:25:38.490 NVMe Specification Version (VS): 1.3 00:25:38.490 NVMe Specification Version (Identify): 1.3 00:25:38.490 Maximum Queue Entries: 128 00:25:38.490 Contiguous Queues Required: No 00:25:38.490 Arbitration Mechanisms Supported 00:25:38.490 Weighted Round Robin: Not Supported 00:25:38.490 Vendor Specific: Not Supported 00:25:38.490 Reset Timeout: 7500 ms 00:25:38.490 Doorbell Stride: 4 bytes 00:25:38.490 NVM Subsystem Reset: Not Supported 00:25:38.490 Command Sets Supported 00:25:38.490 NVM Command Set: Supported 00:25:38.491 Boot Partition: Not Supported 00:25:38.491 Memory Page Size Minimum: 4096 bytes 00:25:38.491 Memory Page Size Maximum: 4096 bytes 00:25:38.491 Persistent Memory Region: Not Supported 00:25:38.491 Optional Asynchronous Events Supported 00:25:38.491 Namespace Attribute Notices: Supported 00:25:38.491 Firmware Activation Notices: Not Supported 00:25:38.491 ANA Change Notices: Supported 00:25:38.491 PLE Aggregate Log Change Notices: Not Supported 00:25:38.491 LBA Status Info Alert Notices: Not Supported 00:25:38.491 EGE Aggregate Log Change Notices: Not Supported 00:25:38.491 Normal NVM Subsystem Shutdown event: Not Supported 00:25:38.491 Zone Descriptor Change Notices: Not Supported 00:25:38.491 Discovery Log Change Notices: Not Supported 00:25:38.491 Controller Attributes 00:25:38.491 128-bit Host Identifier: 
Supported 00:25:38.491 Non-Operational Permissive Mode: Not Supported 00:25:38.491 NVM Sets: Not Supported 00:25:38.491 Read Recovery Levels: Not Supported 00:25:38.491 Endurance Groups: Not Supported 00:25:38.491 Predictable Latency Mode: Not Supported 00:25:38.491 Traffic Based Keep ALive: Supported 00:25:38.491 Namespace Granularity: Not Supported 00:25:38.491 SQ Associations: Not Supported 00:25:38.491 UUID List: Not Supported 00:25:38.491 Multi-Domain Subsystem: Not Supported 00:25:38.491 Fixed Capacity Management: Not Supported 00:25:38.491 Variable Capacity Management: Not Supported 00:25:38.491 Delete Endurance Group: Not Supported 00:25:38.491 Delete NVM Set: Not Supported 00:25:38.491 Extended LBA Formats Supported: Not Supported 00:25:38.491 Flexible Data Placement Supported: Not Supported 00:25:38.491 00:25:38.491 Controller Memory Buffer Support 00:25:38.491 ================================ 00:25:38.491 Supported: No 00:25:38.491 00:25:38.491 Persistent Memory Region Support 00:25:38.491 ================================ 00:25:38.491 Supported: No 00:25:38.491 00:25:38.491 Admin Command Set Attributes 00:25:38.491 ============================ 00:25:38.491 Security Send/Receive: Not Supported 00:25:38.491 Format NVM: Not Supported 00:25:38.491 Firmware Activate/Download: Not Supported 00:25:38.491 Namespace Management: Not Supported 00:25:38.491 Device Self-Test: Not Supported 00:25:38.491 Directives: Not Supported 00:25:38.491 NVMe-MI: Not Supported 00:25:38.491 Virtualization Management: Not Supported 00:25:38.491 Doorbell Buffer Config: Not Supported 00:25:38.491 Get LBA Status Capability: Not Supported 00:25:38.491 Command & Feature Lockdown Capability: Not Supported 00:25:38.491 Abort Command Limit: 4 00:25:38.491 Async Event Request Limit: 4 00:25:38.491 Number of Firmware Slots: N/A 00:25:38.491 Firmware Slot 1 Read-Only: N/A 00:25:38.491 Firmware Activation Without Reset: N/A 00:25:38.491 Multiple Update Detection Support: N/A 00:25:38.491 Firmware Update Granularity: No Information Provided 00:25:38.491 Per-Namespace SMART Log: Yes 00:25:38.491 Asymmetric Namespace Access Log Page: Supported 00:25:38.491 ANA Transition Time : 10 sec 00:25:38.491 00:25:38.491 Asymmetric Namespace Access Capabilities 00:25:38.491 ANA Optimized State : Supported 00:25:38.491 ANA Non-Optimized State : Supported 00:25:38.491 ANA Inaccessible State : Supported 00:25:38.491 ANA Persistent Loss State : Supported 00:25:38.491 ANA Change State : Supported 00:25:38.491 ANAGRPID is not changed : No 00:25:38.491 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:25:38.491 00:25:38.491 ANA Group Identifier Maximum : 128 00:25:38.491 Number of ANA Group Identifiers : 128 00:25:38.491 Max Number of Allowed Namespaces : 1024 00:25:38.491 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:25:38.491 Command Effects Log Page: Supported 00:25:38.491 Get Log Page Extended Data: Supported 00:25:38.491 Telemetry Log Pages: Not Supported 00:25:38.491 Persistent Event Log Pages: Not Supported 00:25:38.491 Supported Log Pages Log Page: May Support 00:25:38.491 Commands Supported & Effects Log Page: Not Supported 00:25:38.491 Feature Identifiers & Effects Log Page:May Support 00:25:38.491 NVMe-MI Commands & Effects Log Page: May Support 00:25:38.491 Data Area 4 for Telemetry Log: Not Supported 00:25:38.491 Error Log Page Entries Supported: 128 00:25:38.491 Keep Alive: Supported 00:25:38.491 Keep Alive Granularity: 1000 ms 00:25:38.491 00:25:38.491 NVM Command Set Attributes 00:25:38.491 ========================== 
00:25:38.491 Submission Queue Entry Size 00:25:38.491 Max: 64 00:25:38.491 Min: 64 00:25:38.491 Completion Queue Entry Size 00:25:38.491 Max: 16 00:25:38.491 Min: 16 00:25:38.491 Number of Namespaces: 1024 00:25:38.491 Compare Command: Not Supported 00:25:38.491 Write Uncorrectable Command: Not Supported 00:25:38.491 Dataset Management Command: Supported 00:25:38.491 Write Zeroes Command: Supported 00:25:38.491 Set Features Save Field: Not Supported 00:25:38.491 Reservations: Not Supported 00:25:38.491 Timestamp: Not Supported 00:25:38.491 Copy: Not Supported 00:25:38.491 Volatile Write Cache: Present 00:25:38.491 Atomic Write Unit (Normal): 1 00:25:38.491 Atomic Write Unit (PFail): 1 00:25:38.491 Atomic Compare & Write Unit: 1 00:25:38.491 Fused Compare & Write: Not Supported 00:25:38.491 Scatter-Gather List 00:25:38.491 SGL Command Set: Supported 00:25:38.491 SGL Keyed: Supported 00:25:38.491 SGL Bit Bucket Descriptor: Not Supported 00:25:38.491 SGL Metadata Pointer: Not Supported 00:25:38.491 Oversized SGL: Not Supported 00:25:38.491 SGL Metadata Address: Not Supported 00:25:38.491 SGL Offset: Supported 00:25:38.491 Transport SGL Data Block: Not Supported 00:25:38.491 Replay Protected Memory Block: Not Supported 00:25:38.491 00:25:38.491 Firmware Slot Information 00:25:38.491 ========================= 00:25:38.491 Active slot: 0 00:25:38.491 00:25:38.491 Asymmetric Namespace Access 00:25:38.491 =========================== 00:25:38.491 Change Count : 0 00:25:38.491 Number of ANA Group Descriptors : 1 00:25:38.491 ANA Group Descriptor : 0 00:25:38.491 ANA Group ID : 1 00:25:38.491 Number of NSID Values : 1 00:25:38.491 Change Count : 0 00:25:38.491 ANA State : 1 00:25:38.491 Namespace Identifier : 1 00:25:38.491 00:25:38.491 Commands Supported and Effects 00:25:38.491 ============================== 00:25:38.491 Admin Commands 00:25:38.491 -------------- 00:25:38.491 Get Log Page (02h): Supported 00:25:38.491 Identify (06h): Supported 00:25:38.491 Abort (08h): Supported 00:25:38.491 Set Features (09h): Supported 00:25:38.491 Get Features (0Ah): Supported 00:25:38.491 Asynchronous Event Request (0Ch): Supported 00:25:38.491 Keep Alive (18h): Supported 00:25:38.491 I/O Commands 00:25:38.491 ------------ 00:25:38.491 Flush (00h): Supported 00:25:38.491 Write (01h): Supported LBA-Change 00:25:38.491 Read (02h): Supported 00:25:38.491 Write Zeroes (08h): Supported LBA-Change 00:25:38.491 Dataset Management (09h): Supported 00:25:38.491 00:25:38.491 Error Log 00:25:38.491 ========= 00:25:38.491 Entry: 0 00:25:38.491 Error Count: 0x3 00:25:38.491 Submission Queue Id: 0x0 00:25:38.491 Command Id: 0x5 00:25:38.491 Phase Bit: 0 00:25:38.491 Status Code: 0x2 00:25:38.491 Status Code Type: 0x0 00:25:38.491 Do Not Retry: 1 00:25:38.491 Error Location: 0x28 00:25:38.491 LBA: 0x0 00:25:38.491 Namespace: 0x0 00:25:38.491 Vendor Log Page: 0x0 00:25:38.491 ----------- 00:25:38.491 Entry: 1 00:25:38.491 Error Count: 0x2 00:25:38.491 Submission Queue Id: 0x0 00:25:38.491 Command Id: 0x5 00:25:38.491 Phase Bit: 0 00:25:38.491 Status Code: 0x2 00:25:38.491 Status Code Type: 0x0 00:25:38.491 Do Not Retry: 1 00:25:38.491 Error Location: 0x28 00:25:38.491 LBA: 0x0 00:25:38.491 Namespace: 0x0 00:25:38.491 Vendor Log Page: 0x0 00:25:38.491 ----------- 00:25:38.491 Entry: 2 00:25:38.491 Error Count: 0x1 00:25:38.491 Submission Queue Id: 0x0 00:25:38.491 Command Id: 0x0 00:25:38.491 Phase Bit: 0 00:25:38.491 Status Code: 0x2 00:25:38.491 Status Code Type: 0x0 00:25:38.491 Do Not Retry: 1 00:25:38.491 Error Location: 
0x28 00:25:38.491 LBA: 0x0 00:25:38.491 Namespace: 0x0 00:25:38.491 Vendor Log Page: 0x0 00:25:38.491 00:25:38.491 Number of Queues 00:25:38.491 ================ 00:25:38.491 Number of I/O Submission Queues: 128 00:25:38.491 Number of I/O Completion Queues: 128 00:25:38.491 00:25:38.491 ZNS Specific Controller Data 00:25:38.491 ============================ 00:25:38.491 Zone Append Size Limit: 0 00:25:38.491 00:25:38.491 00:25:38.491 Active Namespaces 00:25:38.491 ================= 00:25:38.491 get_feature(0x05) failed 00:25:38.491 Namespace ID:1 00:25:38.491 Command Set Identifier: NVM (00h) 00:25:38.491 Deallocate: Supported 00:25:38.491 Deallocated/Unwritten Error: Not Supported 00:25:38.491 Deallocated Read Value: Unknown 00:25:38.491 Deallocate in Write Zeroes: Not Supported 00:25:38.491 Deallocated Guard Field: 0xFFFF 00:25:38.491 Flush: Supported 00:25:38.491 Reservation: Not Supported 00:25:38.491 Namespace Sharing Capabilities: Multiple Controllers 00:25:38.491 Size (in LBAs): 3750748848 (1788GiB) 00:25:38.492 Capacity (in LBAs): 3750748848 (1788GiB) 00:25:38.492 Utilization (in LBAs): 3750748848 (1788GiB) 00:25:38.492 UUID: 88370404-ae89-4b70-bd29-043dd0c29cb8 00:25:38.492 Thin Provisioning: Not Supported 00:25:38.492 Per-NS Atomic Units: Yes 00:25:38.492 Atomic Write Unit (Normal): 8 00:25:38.492 Atomic Write Unit (PFail): 8 00:25:38.492 Preferred Write Granularity: 8 00:25:38.492 Atomic Compare & Write Unit: 8 00:25:38.492 Atomic Boundary Size (Normal): 0 00:25:38.492 Atomic Boundary Size (PFail): 0 00:25:38.492 Atomic Boundary Offset: 0 00:25:38.492 NGUID/EUI64 Never Reused: No 00:25:38.492 ANA group ID: 1 00:25:38.492 Namespace Write Protected: No 00:25:38.492 Number of LBA Formats: 1 00:25:38.492 Current LBA Format: LBA Format #00 00:25:38.492 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:38.492 00:25:38.492 10:33:15 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:25:38.492 10:33:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:38.492 10:33:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:25:38.492 10:33:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:25:38.492 10:33:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:25:38.492 10:33:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:25:38.492 10:33:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:38.492 10:33:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:25:38.492 rmmod nvme_rdma 00:25:38.492 rmmod nvme_fabrics 00:25:38.492 10:33:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:38.492 10:33:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:25:38.492 10:33:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:25:38.492 10:33:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:25:38.492 10:33:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:38.492 10:33:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:25:38.492 10:33:15 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:25:38.492 10:33:15 nvmf_rdma.nvmf_identify_kernel_target -- 
nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:25:38.492 10:33:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:25:38.492 10:33:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:38.492 10:33:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:38.492 10:33:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:38.492 10:33:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:38.492 10:33:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:25:38.492 10:33:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_rdma nvmet 00:25:38.492 10:33:15 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:25:42.699 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:25:42.699 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:25:42.699 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:25:42.699 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:25:42.699 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:25:42.699 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:25:42.699 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:25:42.699 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:25:42.699 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:25:42.699 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:25:42.699 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:25:42.699 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:25:42.699 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:25:42.699 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:25:42.699 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:25:42.699 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:25:44.086 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:25:44.348 00:25:44.348 real 0m19.722s 00:25:44.348 user 0m5.629s 00:25:44.348 sys 0m11.482s 00:25:44.348 10:33:21 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:44.348 10:33:21 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:44.348 ************************************ 00:25:44.348 END TEST nvmf_identify_kernel_target 00:25:44.348 ************************************ 00:25:44.348 10:33:21 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:25:44.348 10:33:21 nvmf_rdma -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:25:44.348 10:33:21 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:44.348 10:33:21 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:44.348 10:33:21 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:25:44.348 ************************************ 00:25:44.348 START TEST nvmf_auth_host 00:25:44.348 ************************************ 00:25:44.348 10:33:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:25:44.348 * Looking for test storage... 
00:25:44.348 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:25:44.348 10:33:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:25:44.348 10:33:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:25:44.348 10:33:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:44.348 10:33:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:44.348 10:33:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:44.348 10:33:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:44.348 10:33:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:44.348 10:33:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:44.348 10:33:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:44.348 10:33:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:44.348 10:33:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:44.348 10:33:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:44.348 10:33:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:44.348 10:33:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:44.348 10:33:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:44.348 10:33:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:44.348 10:33:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:44.348 10:33:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:44.348 10:33:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:44.349 10:33:21 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:44.349 10:33:21 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:44.349 10:33:21 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:44.349 10:33:21 nvmf_rdma.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.349 10:33:21 nvmf_rdma.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.349 10:33:21 nvmf_rdma.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.349 10:33:21 nvmf_rdma.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:25:44.349 10:33:21 nvmf_rdma.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.349 10:33:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:25:44.349 10:33:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:44.349 10:33:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:44.349 10:33:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:44.349 10:33:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:44.349 10:33:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:44.349 10:33:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:44.349 10:33:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:44.349 10:33:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:44.349 10:33:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:25:44.349 10:33:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:25:44.349 10:33:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:25:44.349 10:33:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:25:44.349 10:33:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:44.349 10:33:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:44.349 10:33:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:25:44.349 10:33:21 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@21 -- # ckeys=() 00:25:44.349 10:33:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:25:44.349 10:33:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:25:44.349 10:33:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:44.349 10:33:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:44.349 10:33:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:44.349 10:33:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:44.349 10:33:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:44.349 10:33:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:44.349 10:33:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:44.349 10:33:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:44.349 10:33:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:44.349 10:33:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:25:44.349 10:33:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.494 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:52.494 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:25:52.494 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:52.494 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:52.494 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:52.494 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:52.494 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:52.494 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:25:52.494 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:52.494 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:25:52.494 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:25:52.494 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:25:52.494 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:25:52.494 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:25:52.494 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:25:52.494 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:52.494 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:52.494 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:52.494 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:52.494 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:52.494 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:52.494 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:52.494 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:52.494 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:52.494 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:52.494 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:52.494 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:52.494 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:25:52.494 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:25:52.494 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:25:52.494 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:25:52.494 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:25:52.494 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:52.494 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:52.494 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:25:52.494 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:25:52.494 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:25:52.494 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:25:52.494 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:52.494 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:52.494 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:25:52.494 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:25:52.494 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:52.494 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:25:52.494 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:25:52.494 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:25:52.494 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:25:52.494 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:52.494 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:52.494 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:25:52.494 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:25:52.494 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:52.494 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:25:52.494 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:52.494 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:52.494 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:25:52.494 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:52.494 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:52.494 10:33:29 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:25:52.494 Found net devices under 0000:98:00.0: mlx_0_0 00:25:52.494 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:52.494 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:52.494 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:52.494 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:25:52.494 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:52.494 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:52.494 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:25:52.494 Found net devices under 0000:98:00.1: mlx_0_1 00:25:52.494 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:52.494 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:52.494 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:25:52.494 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:52.494 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:25:52.494 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:25:52.494 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@420 -- # rdma_device_init 00:25:52.494 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@58 -- # uname 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@62 -- # modprobe ib_cm 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@63 -- # modprobe ib_core 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@64 -- # modprobe ib_umad 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@66 -- # modprobe iw_cm 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@502 -- # allocate_nic_ips 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@73 -- # get_rdma_if_list 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:25:52.495 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:52.495 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:25:52.495 altname enp152s0f0np0 00:25:52.495 altname ens817f0np0 00:25:52.495 inet 192.168.100.8/24 scope global mlx_0_0 00:25:52.495 valid_lft forever preferred_lft forever 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:25:52.495 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:52.495 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:25:52.495 altname enp152s0f1np1 00:25:52.495 altname ens817f1np1 00:25:52.495 inet 192.168.100.9/24 scope global mlx_0_1 00:25:52.495 valid_lft forever preferred_lft forever 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:52.495 
10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@86 -- # get_rdma_if_list 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:25:52.495 192.168.100.9' 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:25:52.495 192.168.100.9' 00:25:52.495 10:33:29 
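The get_ip_address calls traced above resolve each RDMA interface's IPv4 address straight from the output of ip -o -4 addr show (field 4, with the prefix length stripped), which is where 192.168.100.8 (mlx_0_0) and 192.168.100.9 (mlx_0_1) come from. A minimal standalone sketch of that lookup, assuming the interface names seen in this run:

get_ip_address() {
    local interface=$1
    # "ip -o -4 addr show" prints one line per address; field 4 is ADDR/PREFIX.
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

for nic in mlx_0_0 mlx_0_1; do
    ip=$(get_ip_address "$nic")
    # The test treats a missing address as fatal, so mirror that here.
    [[ -n $ip ]] || { echo "no IPv4 address configured on $nic" >&2; exit 1; }
    echo "$nic -> $ip"
done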
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@457 -- # head -n 1 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:52.495 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:25:52.496 192.168.100.9' 00:25:52.496 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@458 -- # tail -n +2 00:25:52.496 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@458 -- # head -n 1 00:25:52.496 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:52.496 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:25:52.496 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:52.496 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:25:52.496 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:25:52.496 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:25:52.496 10:33:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:25:52.496 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:52.496 10:33:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:52.496 10:33:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.496 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=3067271 00:25:52.496 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 3067271 00:25:52.496 10:33:29 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:25:52.496 10:33:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 3067271 ']' 00:25:52.496 10:33:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:52.496 10:33:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:52.496 10:33:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
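nvmfappstart then launches build/bin/nvmf_tgt with -i 0 -e 0xFFFF -L nvme_auth, and waitforlisten polls until the application answers on /var/tmp/spdk.sock (up to max_retries=100, as in the trace). A simplified stand-in for that sequence, assuming it runs from the root of an SPDK checkout and probing readiness with the generic rpc_get_methods RPC rather than the repository helper's exact check:

rpc_sock=/var/tmp/spdk.sock

# Flags match the invocation in the trace above.
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
nvmfpid=$!

echo "Waiting for process to start up and listen on UNIX domain socket $rpc_sock..."
for _ in $(seq 1 100); do
    # Any successful RPC means the target is up; rpc_get_methods is cheap.
    if ./scripts/rpc.py -s "$rpc_sock" rpc_get_methods &> /dev/null; then
        break
    fi
    # Bail out early if the target died instead of coming up.
    kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.1
done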
00:25:52.496 10:33:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:52.496 10:33:29 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.437 10:33:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:53.437 10:33:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:25:53.437 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:53.437 10:33:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:53.437 10:33:30 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.437 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:53.437 10:33:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:25:53.437 10:33:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:25:53.437 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:53.437 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:53.437 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:53.437 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:53.437 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:53.437 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:53.437 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=722bb636b30089f7dde32c64074b3824 00:25:53.437 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:53.437 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.rOc 00:25:53.437 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 722bb636b30089f7dde32c64074b3824 0 00:25:53.437 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 722bb636b30089f7dde32c64074b3824 0 00:25:53.437 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:53.437 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:53.437 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=722bb636b30089f7dde32c64074b3824 00:25:53.437 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:53.437 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:53.437 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.rOc 00:25:53.437 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.rOc 00:25:53.437 10:33:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.rOc 00:25:53.437 10:33:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:25:53.437 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:53.437 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:53.437 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:53.437 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # 
digest=sha512 00:25:53.437 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:25:53.437 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:53.437 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0314f903f4ce904d3b46f5b96232bea4c82a0bebb865d442265c418ac3350138 00:25:53.437 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:25:53.437 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.6NV 00:25:53.437 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0314f903f4ce904d3b46f5b96232bea4c82a0bebb865d442265c418ac3350138 3 00:25:53.437 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0314f903f4ce904d3b46f5b96232bea4c82a0bebb865d442265c418ac3350138 3 00:25:53.437 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:53.437 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:53.437 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0314f903f4ce904d3b46f5b96232bea4c82a0bebb865d442265c418ac3350138 00:25:53.437 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:25:53.437 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:53.698 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.6NV 00:25:53.698 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.6NV 00:25:53.698 10:33:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.6NV 00:25:53.698 10:33:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:25:53.698 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:53.698 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:53.698 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:53.698 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:53.698 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:53.698 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:53.698 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=03c64c0483a1b25bda424f4871b7982ba811541d7664a4f0 00:25:53.698 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:53.698 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.uww 00:25:53.698 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 03c64c0483a1b25bda424f4871b7982ba811541d7664a4f0 0 00:25:53.698 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 03c64c0483a1b25bda424f4871b7982ba811541d7664a4f0 0 00:25:53.698 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:53.698 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:53.698 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=03c64c0483a1b25bda424f4871b7982ba811541d7664a4f0 00:25:53.698 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:53.698 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:53.698 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # 
chmod 0600 /tmp/spdk.key-null.uww 00:25:53.698 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.uww 00:25:53.698 10:33:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.uww 00:25:53.698 10:33:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:25:53.698 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:53.698 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:53.698 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:53.698 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:25:53.698 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:53.698 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:53.698 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=59031d47ee75e796c3b1311481c20fd30ae283150fed6416 00:25:53.698 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:25:53.698 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.6hO 00:25:53.699 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 59031d47ee75e796c3b1311481c20fd30ae283150fed6416 2 00:25:53.699 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 59031d47ee75e796c3b1311481c20fd30ae283150fed6416 2 00:25:53.699 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:53.699 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:53.699 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=59031d47ee75e796c3b1311481c20fd30ae283150fed6416 00:25:53.699 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:25:53.699 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:53.699 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.6hO 00:25:53.699 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.6hO 00:25:53.699 10:33:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.6hO 00:25:53.699 10:33:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:53.699 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:53.699 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:53.699 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:53.699 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:25:53.699 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:53.699 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:53.699 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f7fcc982783e2769f799d3686aee3103 00:25:53.699 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:25:53.699 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.T0Z 00:25:53.699 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f7fcc982783e2769f799d3686aee3103 1 00:25:53.699 10:33:30 nvmf_rdma.nvmf_auth_host 
-- nvmf/common.sh@719 -- # format_key DHHC-1 f7fcc982783e2769f799d3686aee3103 1 00:25:53.699 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:53.699 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:53.699 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f7fcc982783e2769f799d3686aee3103 00:25:53.699 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:25:53.699 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:53.699 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.T0Z 00:25:53.699 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.T0Z 00:25:53.699 10:33:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.T0Z 00:25:53.699 10:33:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:53.699 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:53.699 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:53.699 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:53.699 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:25:53.699 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:53.699 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:53.699 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a42322388defa302902337b4fa248708 00:25:53.699 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:25:53.699 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.6jb 00:25:53.699 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a42322388defa302902337b4fa248708 1 00:25:53.699 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a42322388defa302902337b4fa248708 1 00:25:53.699 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:53.699 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:53.699 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a42322388defa302902337b4fa248708 00:25:53.699 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:25:53.699 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:53.699 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.6jb 00:25:53.699 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.6jb 00:25:53.699 10:33:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.6jb 00:25:53.699 10:33:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:25:53.699 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:53.960 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:53.960 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:53.960 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:25:53.960 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:53.960 10:33:30 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:53.960 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9064293be3d891a590c2f6aee98385c95673cd8242e53eb2 00:25:53.960 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:25:53.960 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.n8n 00:25:53.960 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9064293be3d891a590c2f6aee98385c95673cd8242e53eb2 2 00:25:53.960 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9064293be3d891a590c2f6aee98385c95673cd8242e53eb2 2 00:25:53.960 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:53.960 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:53.960 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9064293be3d891a590c2f6aee98385c95673cd8242e53eb2 00:25:53.960 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:25:53.960 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:53.960 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.n8n 00:25:53.960 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.n8n 00:25:53.960 10:33:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.n8n 00:25:53.960 10:33:30 nvmf_rdma.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:25:53.960 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:53.960 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:53.960 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:53.960 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:53.960 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:53.960 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:53.960 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=62685ab4f1e9e2d5b79109bce6edb46b 00:25:53.960 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:53.960 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.xL8 00:25:53.960 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 62685ab4f1e9e2d5b79109bce6edb46b 0 00:25:53.960 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 62685ab4f1e9e2d5b79109bce6edb46b 0 00:25:53.960 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:53.960 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:53.960 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=62685ab4f1e9e2d5b79109bce6edb46b 00:25:53.960 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:53.960 10:33:30 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:53.960 10:33:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.xL8 00:25:53.960 10:33:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.xL8 00:25:53.960 10:33:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.xL8 
00:25:53.960 10:33:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:25:53.960 10:33:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:53.960 10:33:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:53.960 10:33:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:53.960 10:33:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:25:53.960 10:33:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:25:53.960 10:33:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:53.960 10:33:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e8f4281223fd173bffe8669bfcd470a02d3f95cf9570a146f967fd6b42684b83 00:25:53.960 10:33:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:25:53.960 10:33:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.KTu 00:25:53.960 10:33:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e8f4281223fd173bffe8669bfcd470a02d3f95cf9570a146f967fd6b42684b83 3 00:25:53.960 10:33:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e8f4281223fd173bffe8669bfcd470a02d3f95cf9570a146f967fd6b42684b83 3 00:25:53.960 10:33:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:53.960 10:33:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:53.960 10:33:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e8f4281223fd173bffe8669bfcd470a02d3f95cf9570a146f967fd6b42684b83 00:25:53.960 10:33:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:25:53.960 10:33:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:53.960 10:33:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.KTu 00:25:53.960 10:33:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.KTu 00:25:53.960 10:33:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.KTu 00:25:53.960 10:33:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:25:53.960 10:33:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3067271 00:25:53.960 10:33:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 3067271 ']' 00:25:53.960 10:33:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:53.960 10:33:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:53.960 10:33:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:53.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
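Once the listener is up, the loop traced below hands each generated secret file to the running SPDK target through the keyring_file_add_key RPC, naming them key0..key4 for the subsystem keys and ckey0..ckey3 for the controller keys. A standalone sketch of that registration step, assuming rpc.py talks to the default /var/tmp/spdk.sock and that keys[] / ckeys[] hold the file paths produced above:

    # register every generated key file with the SPDK keyring
    for i in "${!keys[@]}"; do
        ./scripts/rpc.py keyring_file_add_key "key$i" "${keys[i]}"
        # a controller key is optional; register it only when one was generated
        [[ -n ${ckeys[i]} ]] && ./scripts/rpc.py keyring_file_add_key "ckey$i" "${ckeys[i]}"
    done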
00:25:53.960 10:33:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:53.960 10:33:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.222 10:33:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:54.222 10:33:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:25:54.222 10:33:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:54.222 10:33:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.rOc 00:25:54.222 10:33:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.222 10:33:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.222 10:33:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.222 10:33:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.6NV ]] 00:25:54.222 10:33:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.6NV 00:25:54.222 10:33:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.222 10:33:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.222 10:33:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.222 10:33:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:54.222 10:33:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.uww 00:25:54.222 10:33:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.222 10:33:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.222 10:33:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.222 10:33:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.6hO ]] 00:25:54.222 10:33:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.6hO 00:25:54.222 10:33:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.222 10:33:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.222 10:33:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.222 10:33:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:54.222 10:33:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.T0Z 00:25:54.222 10:33:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.222 10:33:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.222 10:33:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.222 10:33:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.6jb ]] 00:25:54.222 10:33:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.6jb 00:25:54.222 10:33:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.222 10:33:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.222 10:33:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.222 10:33:31 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:54.222 10:33:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.n8n 00:25:54.222 10:33:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.222 10:33:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.222 10:33:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.222 10:33:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.xL8 ]] 00:25:54.223 10:33:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.xL8 00:25:54.223 10:33:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.223 10:33:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.223 10:33:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.223 10:33:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:54.223 10:33:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.KTu 00:25:54.223 10:33:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.223 10:33:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.223 10:33:31 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.223 10:33:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:25:54.223 10:33:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:25:54.223 10:33:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:25:54.223 10:33:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:54.223 10:33:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:54.223 10:33:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:54.223 10:33:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.223 10:33:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.223 10:33:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:25:54.223 10:33:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:25:54.223 10:33:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:25:54.223 10:33:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:25:54.223 10:33:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:25:54.223 10:33:31 nvmf_rdma.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 192.168.100.8 00:25:54.223 10:33:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=192.168.100.8 00:25:54.223 10:33:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:25:54.223 10:33:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:54.223 10:33:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:54.223 10:33:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@637 -- # 
kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:54.223 10:33:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:25:54.223 10:33:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:25:54.223 10:33:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:25:54.223 10:33:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:54.223 10:33:31 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:25:58.424 Waiting for block devices as requested 00:25:58.424 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:25:58.424 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:25:58.424 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:25:58.424 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:25:58.424 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:25:58.424 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:25:58.424 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:25:58.424 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:25:58.728 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:25:58.728 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:25:58.728 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:25:59.032 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:25:59.032 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:25:59.032 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:25:59.032 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:25:59.294 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:25:59.294 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:25:59.862 10:33:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:59.862 10:33:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:59.862 10:33:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:25:59.862 10:33:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:25:59.862 10:33:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:59.862 10:33:36 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:25:59.862 10:33:36 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:25:59.862 10:33:36 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:25:59.862 10:33:36 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:59.862 No valid GPT data, bailing 00:25:59.862 10:33:37 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:59.862 10:33:37 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:25:59.862 10:33:37 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:25:59.862 10:33:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:25:59.862 10:33:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:25:59.862 10:33:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:59.862 10:33:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:59.862 10:33:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:00.121 
10:33:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:26:00.121 10:33:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:26:00.121 10:33:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:26:00.121 10:33:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:26:00.121 10:33:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 192.168.100.8 00:26:00.121 10:33:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@672 -- # echo rdma 00:26:00.121 10:33:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:26:00.121 10:33:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:26:00.121 10:33:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:00.121 10:33:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 192.168.100.8 -t rdma -s 4420 00:26:00.121 00:26:00.121 Discovery Log Number of Records 2, Generation counter 2 00:26:00.121 =====Discovery Log Entry 0====== 00:26:00.121 trtype: rdma 00:26:00.121 adrfam: ipv4 00:26:00.121 subtype: current discovery subsystem 00:26:00.121 treq: not specified, sq flow control disable supported 00:26:00.121 portid: 1 00:26:00.121 trsvcid: 4420 00:26:00.121 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:00.121 traddr: 192.168.100.8 00:26:00.121 eflags: none 00:26:00.121 rdma_prtype: not specified 00:26:00.121 rdma_qptype: connected 00:26:00.121 rdma_cms: rdma-cm 00:26:00.121 rdma_pkey: 0x0000 00:26:00.121 =====Discovery Log Entry 1====== 00:26:00.121 trtype: rdma 00:26:00.121 adrfam: ipv4 00:26:00.121 subtype: nvme subsystem 00:26:00.121 treq: not specified, sq flow control disable supported 00:26:00.121 portid: 1 00:26:00.121 trsvcid: 4420 00:26:00.121 subnqn: nqn.2024-02.io.spdk:cnode0 00:26:00.121 traddr: 192.168.100.8 00:26:00.121 eflags: none 00:26:00.121 rdma_prtype: not specified 00:26:00.121 rdma_qptype: connected 00:26:00.121 rdma_cms: rdma-cm 00:26:00.121 rdma_pkey: 0x0000 00:26:00.121 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:00.121 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:26:00.121 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:00.121 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:00.121 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.122 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:00.122 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:00.122 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:00.122 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDNjNjRjMDQ4M2ExYjI1YmRhNDI0ZjQ4NzFiNzk4MmJhODExNTQxZDc2NjRhNGYwcKa7Qg==: 00:26:00.122 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTkwMzFkNDdlZTc1ZTc5NmMzYjEzMTE0ODFjMjBmZDMwYWUyODMxNTBmZWQ2NDE2bZEhIA==: 00:26:00.122 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 
'hmac(sha256)' 00:26:00.122 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:00.122 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDNjNjRjMDQ4M2ExYjI1YmRhNDI0ZjQ4NzFiNzk4MmJhODExNTQxZDc2NjRhNGYwcKa7Qg==: 00:26:00.122 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTkwMzFkNDdlZTc1ZTc5NmMzYjEzMTE0ODFjMjBmZDMwYWUyODMxNTBmZWQ2NDE2bZEhIA==: ]] 00:26:00.122 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTkwMzFkNDdlZTc1ZTc5NmMzYjEzMTE0ODFjMjBmZDMwYWUyODMxNTBmZWQ2NDE2bZEhIA==: 00:26:00.122 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:00.122 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:26:00.122 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:00.122 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:00.122 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:26:00.122 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:00.122 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:26:00.122 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:00.122 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:00.122 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.122 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:00.122 10:33:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.122 10:33:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.122 10:33:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.122 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:00.122 10:33:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:00.122 10:33:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:00.122 10:33:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:00.122 10:33:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.122 10:33:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.122 10:33:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:00.122 10:33:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:00.122 10:33:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:00.122 10:33:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:00.122 10:33:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:00.122 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:00.122 10:33:37 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.122 10:33:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.381 nvme0n1 00:26:00.381 10:33:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.381 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.381 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:00.381 10:33:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.381 10:33:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.382 10:33:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.382 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.382 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.382 10:33:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.382 10:33:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.382 10:33:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.382 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:00.382 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:00.382 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:00.382 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:26:00.382 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.382 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:00.382 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:00.382 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:00.382 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIyYmI2MzZiMzAwODlmN2RkZTMyYzY0MDc0YjM4MjS+gdJ6: 00:26:00.382 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDMxNGY5MDNmNGNlOTA0ZDNiNDZmNWI5NjIzMmJlYTRjODJhMGJlYmI4NjVkNDQyMjY1YzQxOGFjMzM1MDEzOFEPKVM=: 00:26:00.382 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:00.382 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:00.382 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIyYmI2MzZiMzAwODlmN2RkZTMyYzY0MDc0YjM4MjS+gdJ6: 00:26:00.382 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDMxNGY5MDNmNGNlOTA0ZDNiNDZmNWI5NjIzMmJlYTRjODJhMGJlYmI4NjVkNDQyMjY1YzQxOGFjMzM1MDEzOFEPKVM=: ]] 00:26:00.382 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDMxNGY5MDNmNGNlOTA0ZDNiNDZmNWI5NjIzMmJlYTRjODJhMGJlYmI4NjVkNDQyMjY1YzQxOGFjMzM1MDEzOFEPKVM=: 00:26:00.382 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:26:00.382 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:00.382 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:00.382 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:00.382 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:00.382 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.382 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:00.382 10:33:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.382 10:33:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.382 10:33:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.382 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:00.382 10:33:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:00.382 10:33:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:00.382 10:33:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:00.382 10:33:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.382 10:33:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.382 10:33:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:00.382 10:33:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:00.382 10:33:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:00.382 10:33:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:00.382 10:33:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:00.382 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:00.382 10:33:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.382 10:33:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.642 nvme0n1 00:26:00.642 10:33:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.642 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.643 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:00.643 10:33:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.643 10:33:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.643 10:33:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.643 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.643 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.643 10:33:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.643 10:33:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.903 10:33:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.903 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:00.903 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:00.903 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.903 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:00.903 10:33:37 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:00.903 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:00.903 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDNjNjRjMDQ4M2ExYjI1YmRhNDI0ZjQ4NzFiNzk4MmJhODExNTQxZDc2NjRhNGYwcKa7Qg==: 00:26:00.903 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTkwMzFkNDdlZTc1ZTc5NmMzYjEzMTE0ODFjMjBmZDMwYWUyODMxNTBmZWQ2NDE2bZEhIA==: 00:26:00.903 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:00.903 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:00.903 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDNjNjRjMDQ4M2ExYjI1YmRhNDI0ZjQ4NzFiNzk4MmJhODExNTQxZDc2NjRhNGYwcKa7Qg==: 00:26:00.903 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTkwMzFkNDdlZTc1ZTc5NmMzYjEzMTE0ODFjMjBmZDMwYWUyODMxNTBmZWQ2NDE2bZEhIA==: ]] 00:26:00.904 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTkwMzFkNDdlZTc1ZTc5NmMzYjEzMTE0ODFjMjBmZDMwYWUyODMxNTBmZWQ2NDE2bZEhIA==: 00:26:00.904 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:26:00.904 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:00.904 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:00.904 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:00.904 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:00.904 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.904 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:00.904 10:33:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.904 10:33:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.904 10:33:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.904 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:00.904 10:33:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:00.904 10:33:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:00.904 10:33:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:00.904 10:33:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.904 10:33:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.904 10:33:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:00.904 10:33:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:00.904 10:33:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:00.904 10:33:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:00.904 10:33:37 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:00.904 10:33:37 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:00.904 10:33:37 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.904 10:33:37 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.164 nvme0n1 00:26:01.164 10:33:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.164 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.164 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:01.164 10:33:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.164 10:33:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.164 10:33:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.164 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.164 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.164 10:33:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.164 10:33:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.164 10:33:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.164 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:01.164 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:01.164 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.164 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:01.164 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:01.164 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:01.164 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjdmY2M5ODI3ODNlMjc2OWY3OTlkMzY4NmFlZTMxMDN8KQKJ: 00:26:01.164 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQyMzIyMzg4ZGVmYTMwMjkwMjMzN2I0ZmEyNDg3MDhJ0Uvw: 00:26:01.164 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:01.164 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:01.164 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjdmY2M5ODI3ODNlMjc2OWY3OTlkMzY4NmFlZTMxMDN8KQKJ: 00:26:01.164 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQyMzIyMzg4ZGVmYTMwMjkwMjMzN2I0ZmEyNDg3MDhJ0Uvw: ]] 00:26:01.164 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQyMzIyMzg4ZGVmYTMwMjkwMjMzN2I0ZmEyNDg3MDhJ0Uvw: 00:26:01.164 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:26:01.164 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:01.164 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:01.164 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:01.164 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:01.164 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:01.164 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:01.164 10:33:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.164 10:33:38 
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.164 10:33:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.164 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:01.164 10:33:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:01.164 10:33:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:01.164 10:33:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:01.164 10:33:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.165 10:33:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.165 10:33:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:01.165 10:33:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:01.165 10:33:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:01.165 10:33:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:01.165 10:33:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:01.165 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:01.165 10:33:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.165 10:33:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.425 nvme0n1 00:26:01.425 10:33:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.425 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.425 10:33:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.425 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:01.425 10:33:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.425 10:33:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.425 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.425 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.425 10:33:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.425 10:33:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.425 10:33:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.425 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:01.425 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:26:01.425 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.425 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:01.425 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:01.425 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:01.425 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTA2NDI5M2JlM2Q4OTFhNTkwYzJmNmFlZTk4Mzg1Yzk1NjczY2Q4MjQyZTUzZWIyOuxDtw==: 00:26:01.425 10:33:38 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjI2ODVhYjRmMWU5ZTJkNWI3OTEwOWJjZTZlZGI0NmISBty3: 00:26:01.425 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:01.425 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:01.425 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTA2NDI5M2JlM2Q4OTFhNTkwYzJmNmFlZTk4Mzg1Yzk1NjczY2Q4MjQyZTUzZWIyOuxDtw==: 00:26:01.425 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjI2ODVhYjRmMWU5ZTJkNWI3OTEwOWJjZTZlZGI0NmISBty3: ]] 00:26:01.425 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjI2ODVhYjRmMWU5ZTJkNWI3OTEwOWJjZTZlZGI0NmISBty3: 00:26:01.425 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:26:01.425 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:01.425 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:01.425 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:01.425 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:01.425 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:01.425 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:01.425 10:33:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.425 10:33:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.425 10:33:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.425 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:01.425 10:33:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:01.425 10:33:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:01.425 10:33:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:01.425 10:33:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.425 10:33:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.425 10:33:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:01.425 10:33:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:01.425 10:33:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:01.425 10:33:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:01.425 10:33:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:01.425 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:01.425 10:33:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.425 10:33:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.687 nvme0n1 00:26:01.687 10:33:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.687 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.687 10:33:38 nvmf_rdma.nvmf_auth_host 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.687 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:01.687 10:33:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.687 10:33:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.687 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.687 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.687 10:33:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.687 10:33:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.687 10:33:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.687 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:01.687 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:26:01.687 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.687 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:01.687 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:01.687 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:01.687 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZThmNDI4MTIyM2ZkMTczYmZmZTg2NjliZmNkNDcwYTAyZDNmOTVjZjk1NzBhMTQ2Zjk2N2ZkNmI0MjY4NGI4M6+Fj4w=: 00:26:01.687 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:01.687 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:01.687 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:01.687 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZThmNDI4MTIyM2ZkMTczYmZmZTg2NjliZmNkNDcwYTAyZDNmOTVjZjk1NzBhMTQ2Zjk2N2ZkNmI0MjY4NGI4M6+Fj4w=: 00:26:01.687 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:01.687 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:26:01.687 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:01.687 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:01.687 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:01.687 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:01.687 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:01.687 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:01.687 10:33:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.687 10:33:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.687 10:33:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.687 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:01.687 10:33:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:01.687 10:33:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:01.687 10:33:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:01.687 10:33:38 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.687 10:33:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.687 10:33:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:01.687 10:33:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:01.687 10:33:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:01.687 10:33:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:01.687 10:33:38 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:01.687 10:33:38 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:01.687 10:33:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.687 10:33:38 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.947 nvme0n1 00:26:01.947 10:33:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.948 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.948 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:01.948 10:33:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.948 10:33:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.948 10:33:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.948 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.948 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:01.948 10:33:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.948 10:33:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.208 10:33:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.208 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:02.208 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:02.208 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:26:02.208 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:02.208 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:02.208 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:02.208 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:02.208 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIyYmI2MzZiMzAwODlmN2RkZTMyYzY0MDc0YjM4MjS+gdJ6: 00:26:02.208 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDMxNGY5MDNmNGNlOTA0ZDNiNDZmNWI5NjIzMmJlYTRjODJhMGJlYmI4NjVkNDQyMjY1YzQxOGFjMzM1MDEzOFEPKVM=: 00:26:02.208 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:02.208 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:02.208 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIyYmI2MzZiMzAwODlmN2RkZTMyYzY0MDc0YjM4MjS+gdJ6: 00:26:02.208 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- 
# [[ -z DHHC-1:03:MDMxNGY5MDNmNGNlOTA0ZDNiNDZmNWI5NjIzMmJlYTRjODJhMGJlYmI4NjVkNDQyMjY1YzQxOGFjMzM1MDEzOFEPKVM=: ]] 00:26:02.208 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDMxNGY5MDNmNGNlOTA0ZDNiNDZmNWI5NjIzMmJlYTRjODJhMGJlYmI4NjVkNDQyMjY1YzQxOGFjMzM1MDEzOFEPKVM=: 00:26:02.208 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:26:02.208 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:02.208 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:02.208 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:02.208 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:02.208 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:02.208 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:02.208 10:33:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.208 10:33:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.208 10:33:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.208 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:02.208 10:33:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:02.208 10:33:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:02.208 10:33:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:02.208 10:33:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.208 10:33:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.208 10:33:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:02.208 10:33:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:02.208 10:33:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:02.208 10:33:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:02.208 10:33:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:02.208 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:02.208 10:33:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.208 10:33:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.469 nvme0n1 00:26:02.469 10:33:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.469 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.469 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:02.469 10:33:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.469 10:33:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.469 10:33:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.469 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 
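Each connect_authenticate iteration traced above reduces to a few RPCs on the initiator side: restrict the allowed DH-HMAC-CHAP digests and DH groups, attach to the kernel target using the keyring entry for the keyid under test (plus its controller key when one exists), confirm the controller came up, then detach before the next combination. A condensed sketch of one such iteration, reusing the addresses and NQNs from this run:

    ./scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0

The outer loops simply repeat this for every digest, every dhgroup and every keyid, which is why the same attach/detach pattern recurs throughout the trace.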
00:26:02.469 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:02.469 10:33:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.469 10:33:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.469 10:33:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.469 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:02.469 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:26:02.469 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:02.469 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:02.469 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:02.469 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:02.469 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDNjNjRjMDQ4M2ExYjI1YmRhNDI0ZjQ4NzFiNzk4MmJhODExNTQxZDc2NjRhNGYwcKa7Qg==: 00:26:02.469 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTkwMzFkNDdlZTc1ZTc5NmMzYjEzMTE0ODFjMjBmZDMwYWUyODMxNTBmZWQ2NDE2bZEhIA==: 00:26:02.469 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:02.469 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:02.469 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDNjNjRjMDQ4M2ExYjI1YmRhNDI0ZjQ4NzFiNzk4MmJhODExNTQxZDc2NjRhNGYwcKa7Qg==: 00:26:02.469 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTkwMzFkNDdlZTc1ZTc5NmMzYjEzMTE0ODFjMjBmZDMwYWUyODMxNTBmZWQ2NDE2bZEhIA==: ]] 00:26:02.469 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTkwMzFkNDdlZTc1ZTc5NmMzYjEzMTE0ODFjMjBmZDMwYWUyODMxNTBmZWQ2NDE2bZEhIA==: 00:26:02.469 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:26:02.469 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:02.469 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:02.469 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:02.469 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:02.469 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:02.469 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:02.469 10:33:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.469 10:33:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.469 10:33:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.469 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:02.469 10:33:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:02.469 10:33:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:02.469 10:33:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:02.469 10:33:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.469 10:33:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.469 10:33:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:02.469 10:33:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:02.469 10:33:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:02.469 10:33:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:02.469 10:33:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:02.469 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:02.469 10:33:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.469 10:33:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.729 nvme0n1 00:26:02.729 10:33:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.729 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.729 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:02.729 10:33:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.729 10:33:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.729 10:33:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.729 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:02.729 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:02.729 10:33:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.729 10:33:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.730 10:33:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.730 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:02.730 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:26:02.730 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:02.730 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:02.730 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:02.730 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:02.730 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjdmY2M5ODI3ODNlMjc2OWY3OTlkMzY4NmFlZTMxMDN8KQKJ: 00:26:02.730 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQyMzIyMzg4ZGVmYTMwMjkwMjMzN2I0ZmEyNDg3MDhJ0Uvw: 00:26:02.730 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:02.730 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:02.730 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjdmY2M5ODI3ODNlMjc2OWY3OTlkMzY4NmFlZTMxMDN8KQKJ: 00:26:02.730 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQyMzIyMzg4ZGVmYTMwMjkwMjMzN2I0ZmEyNDg3MDhJ0Uvw: ]] 00:26:02.730 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQyMzIyMzg4ZGVmYTMwMjkwMjMzN2I0ZmEyNDg3MDhJ0Uvw: 00:26:02.730 10:33:39 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:26:02.730 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:02.730 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:02.730 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:02.730 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:02.730 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:02.730 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:02.730 10:33:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.730 10:33:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.990 10:33:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.990 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:02.990 10:33:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:02.990 10:33:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:02.990 10:33:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:02.990 10:33:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.990 10:33:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.990 10:33:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:02.990 10:33:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:02.990 10:33:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:02.990 10:33:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:02.990 10:33:39 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:02.990 10:33:39 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:02.990 10:33:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.990 10:33:39 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.250 nvme0n1 00:26:03.250 10:33:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.250 10:33:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.250 10:33:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:03.250 10:33:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.250 10:33:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.250 10:33:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.250 10:33:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.250 10:33:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.250 10:33:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.250 10:33:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.250 10:33:40 nvmf_rdma.nvmf_auth_host 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.250 10:33:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:03.250 10:33:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:26:03.250 10:33:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.250 10:33:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:03.250 10:33:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:03.250 10:33:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:03.250 10:33:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTA2NDI5M2JlM2Q4OTFhNTkwYzJmNmFlZTk4Mzg1Yzk1NjczY2Q4MjQyZTUzZWIyOuxDtw==: 00:26:03.250 10:33:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjI2ODVhYjRmMWU5ZTJkNWI3OTEwOWJjZTZlZGI0NmISBty3: 00:26:03.250 10:33:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:03.250 10:33:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:03.250 10:33:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTA2NDI5M2JlM2Q4OTFhNTkwYzJmNmFlZTk4Mzg1Yzk1NjczY2Q4MjQyZTUzZWIyOuxDtw==: 00:26:03.250 10:33:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjI2ODVhYjRmMWU5ZTJkNWI3OTEwOWJjZTZlZGI0NmISBty3: ]] 00:26:03.250 10:33:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjI2ODVhYjRmMWU5ZTJkNWI3OTEwOWJjZTZlZGI0NmISBty3: 00:26:03.250 10:33:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:26:03.250 10:33:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:03.250 10:33:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:03.250 10:33:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:03.250 10:33:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:03.250 10:33:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:03.251 10:33:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:03.251 10:33:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.251 10:33:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.251 10:33:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.251 10:33:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:03.251 10:33:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:03.251 10:33:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:03.251 10:33:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:03.251 10:33:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.251 10:33:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.251 10:33:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:03.251 10:33:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:03.251 10:33:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:03.251 10:33:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:03.251 
10:33:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:03.251 10:33:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:03.251 10:33:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.251 10:33:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.511 nvme0n1 00:26:03.511 10:33:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.511 10:33:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.511 10:33:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:03.511 10:33:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.511 10:33:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.511 10:33:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.511 10:33:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.511 10:33:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:03.511 10:33:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.511 10:33:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.511 10:33:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.511 10:33:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:03.511 10:33:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:26:03.511 10:33:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.511 10:33:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:03.511 10:33:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:03.511 10:33:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:03.512 10:33:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZThmNDI4MTIyM2ZkMTczYmZmZTg2NjliZmNkNDcwYTAyZDNmOTVjZjk1NzBhMTQ2Zjk2N2ZkNmI0MjY4NGI4M6+Fj4w=: 00:26:03.512 10:33:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:03.512 10:33:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:03.512 10:33:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:03.512 10:33:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZThmNDI4MTIyM2ZkMTczYmZmZTg2NjliZmNkNDcwYTAyZDNmOTVjZjk1NzBhMTQ2Zjk2N2ZkNmI0MjY4NGI4M6+Fj4w=: 00:26:03.512 10:33:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:03.512 10:33:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:26:03.512 10:33:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:03.512 10:33:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:03.512 10:33:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:03.512 10:33:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:03.512 10:33:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:03.512 10:33:40 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:03.512 10:33:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.512 10:33:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.512 10:33:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.512 10:33:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:03.512 10:33:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:03.512 10:33:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:03.512 10:33:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:03.512 10:33:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.512 10:33:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.512 10:33:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:03.512 10:33:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:03.512 10:33:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:03.512 10:33:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:03.512 10:33:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:03.512 10:33:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:03.512 10:33:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.512 10:33:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.772 nvme0n1 00:26:03.772 10:33:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.772 10:33:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.772 10:33:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:03.772 10:33:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.772 10:33:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.772 10:33:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.033 10:33:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.033 10:33:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.033 10:33:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.033 10:33:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.033 10:33:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.033 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:04.033 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.033 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:26:04.033 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.033 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:04.033 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 
-- # dhgroup=ffdhe4096 00:26:04.033 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:04.033 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIyYmI2MzZiMzAwODlmN2RkZTMyYzY0MDc0YjM4MjS+gdJ6: 00:26:04.033 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDMxNGY5MDNmNGNlOTA0ZDNiNDZmNWI5NjIzMmJlYTRjODJhMGJlYmI4NjVkNDQyMjY1YzQxOGFjMzM1MDEzOFEPKVM=: 00:26:04.033 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:04.033 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:04.033 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIyYmI2MzZiMzAwODlmN2RkZTMyYzY0MDc0YjM4MjS+gdJ6: 00:26:04.033 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDMxNGY5MDNmNGNlOTA0ZDNiNDZmNWI5NjIzMmJlYTRjODJhMGJlYmI4NjVkNDQyMjY1YzQxOGFjMzM1MDEzOFEPKVM=: ]] 00:26:04.033 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDMxNGY5MDNmNGNlOTA0ZDNiNDZmNWI5NjIzMmJlYTRjODJhMGJlYmI4NjVkNDQyMjY1YzQxOGFjMzM1MDEzOFEPKVM=: 00:26:04.033 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:26:04.033 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.033 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:04.033 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:04.033 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:04.033 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.033 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:04.033 10:33:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.033 10:33:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.033 10:33:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.033 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.033 10:33:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:04.033 10:33:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:04.033 10:33:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:04.033 10:33:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.033 10:33:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.033 10:33:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:04.033 10:33:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:04.033 10:33:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:04.033 10:33:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:04.033 10:33:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:04.033 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:04.033 10:33:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:26:04.033 10:33:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.294 nvme0n1 00:26:04.294 10:33:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.294 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.294 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.294 10:33:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.294 10:33:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.294 10:33:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.294 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.294 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.294 10:33:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.294 10:33:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.294 10:33:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.294 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.294 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:26:04.294 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.294 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:04.294 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:04.294 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:04.294 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDNjNjRjMDQ4M2ExYjI1YmRhNDI0ZjQ4NzFiNzk4MmJhODExNTQxZDc2NjRhNGYwcKa7Qg==: 00:26:04.294 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTkwMzFkNDdlZTc1ZTc5NmMzYjEzMTE0ODFjMjBmZDMwYWUyODMxNTBmZWQ2NDE2bZEhIA==: 00:26:04.294 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:04.294 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:04.294 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDNjNjRjMDQ4M2ExYjI1YmRhNDI0ZjQ4NzFiNzk4MmJhODExNTQxZDc2NjRhNGYwcKa7Qg==: 00:26:04.294 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTkwMzFkNDdlZTc1ZTc5NmMzYjEzMTE0ODFjMjBmZDMwYWUyODMxNTBmZWQ2NDE2bZEhIA==: ]] 00:26:04.294 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTkwMzFkNDdlZTc1ZTc5NmMzYjEzMTE0ODFjMjBmZDMwYWUyODMxNTBmZWQ2NDE2bZEhIA==: 00:26:04.294 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:26:04.294 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.294 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:04.294 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:04.294 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:04.294 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.294 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:04.294 10:33:41 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.294 10:33:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.555 10:33:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.555 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.555 10:33:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:04.555 10:33:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:04.555 10:33:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:04.555 10:33:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.555 10:33:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.555 10:33:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:04.555 10:33:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:04.555 10:33:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:04.555 10:33:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:04.555 10:33:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:04.555 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:04.555 10:33:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.555 10:33:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.816 nvme0n1 00:26:04.817 10:33:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.817 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.817 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:04.817 10:33:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.817 10:33:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.817 10:33:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.817 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:04.817 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:04.817 10:33:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.817 10:33:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.817 10:33:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.817 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:04.817 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:26:04.817 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:04.817 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:04.817 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:04.817 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:04.817 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:ZjdmY2M5ODI3ODNlMjc2OWY3OTlkMzY4NmFlZTMxMDN8KQKJ: 00:26:04.817 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQyMzIyMzg4ZGVmYTMwMjkwMjMzN2I0ZmEyNDg3MDhJ0Uvw: 00:26:04.817 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:04.817 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:04.817 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjdmY2M5ODI3ODNlMjc2OWY3OTlkMzY4NmFlZTMxMDN8KQKJ: 00:26:04.817 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQyMzIyMzg4ZGVmYTMwMjkwMjMzN2I0ZmEyNDg3MDhJ0Uvw: ]] 00:26:04.817 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQyMzIyMzg4ZGVmYTMwMjkwMjMzN2I0ZmEyNDg3MDhJ0Uvw: 00:26:04.817 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:26:04.817 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:04.817 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:04.817 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:04.817 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:04.817 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:04.817 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:04.817 10:33:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.817 10:33:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.817 10:33:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.817 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:04.817 10:33:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:04.817 10:33:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:04.817 10:33:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:04.817 10:33:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:04.817 10:33:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:04.817 10:33:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:04.817 10:33:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:04.817 10:33:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:04.817 10:33:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:04.817 10:33:41 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:04.817 10:33:41 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:04.817 10:33:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.817 10:33:41 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.387 nvme0n1 00:26:05.387 10:33:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.387 10:33:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:26:05.387 10:33:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.387 10:33:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.387 10:33:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.387 10:33:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.387 10:33:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.387 10:33:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.387 10:33:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.387 10:33:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.387 10:33:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.387 10:33:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.388 10:33:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:26:05.388 10:33:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.388 10:33:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:05.388 10:33:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:05.388 10:33:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:05.388 10:33:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTA2NDI5M2JlM2Q4OTFhNTkwYzJmNmFlZTk4Mzg1Yzk1NjczY2Q4MjQyZTUzZWIyOuxDtw==: 00:26:05.388 10:33:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjI2ODVhYjRmMWU5ZTJkNWI3OTEwOWJjZTZlZGI0NmISBty3: 00:26:05.388 10:33:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:05.388 10:33:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:05.388 10:33:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTA2NDI5M2JlM2Q4OTFhNTkwYzJmNmFlZTk4Mzg1Yzk1NjczY2Q4MjQyZTUzZWIyOuxDtw==: 00:26:05.388 10:33:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjI2ODVhYjRmMWU5ZTJkNWI3OTEwOWJjZTZlZGI0NmISBty3: ]] 00:26:05.388 10:33:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjI2ODVhYjRmMWU5ZTJkNWI3OTEwOWJjZTZlZGI0NmISBty3: 00:26:05.388 10:33:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:26:05.388 10:33:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.388 10:33:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:05.388 10:33:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:05.388 10:33:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:05.388 10:33:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.388 10:33:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:05.388 10:33:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.388 10:33:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.388 10:33:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.388 10:33:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.388 10:33:42 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:26:05.388 10:33:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:05.388 10:33:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:05.388 10:33:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.388 10:33:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.388 10:33:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:05.388 10:33:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:05.388 10:33:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:05.388 10:33:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:05.388 10:33:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:05.388 10:33:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:05.388 10:33:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.388 10:33:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.648 nvme0n1 00:26:05.648 10:33:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.648 10:33:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:05.648 10:33:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:05.648 10:33:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.648 10:33:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.648 10:33:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.648 10:33:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:05.648 10:33:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:05.648 10:33:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.648 10:33:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.909 10:33:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.909 10:33:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:05.909 10:33:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:26:05.909 10:33:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:05.909 10:33:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:05.909 10:33:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:05.909 10:33:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:05.909 10:33:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZThmNDI4MTIyM2ZkMTczYmZmZTg2NjliZmNkNDcwYTAyZDNmOTVjZjk1NzBhMTQ2Zjk2N2ZkNmI0MjY4NGI4M6+Fj4w=: 00:26:05.909 10:33:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:05.909 10:33:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:05.909 10:33:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:05.909 10:33:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:03:ZThmNDI4MTIyM2ZkMTczYmZmZTg2NjliZmNkNDcwYTAyZDNmOTVjZjk1NzBhMTQ2Zjk2N2ZkNmI0MjY4NGI4M6+Fj4w=: 00:26:05.909 10:33:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:05.909 10:33:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:26:05.909 10:33:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:05.909 10:33:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:05.909 10:33:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:05.909 10:33:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:05.909 10:33:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:05.909 10:33:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:05.909 10:33:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.909 10:33:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:05.909 10:33:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.909 10:33:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:05.909 10:33:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:05.909 10:33:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:05.909 10:33:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:05.909 10:33:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:05.909 10:33:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:05.909 10:33:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:05.909 10:33:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:05.909 10:33:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:05.909 10:33:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:05.909 10:33:42 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:05.909 10:33:42 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:05.909 10:33:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.909 10:33:42 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.170 nvme0n1 00:26:06.170 10:33:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.170 10:33:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.170 10:33:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.170 10:33:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.170 10:33:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.170 10:33:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.170 10:33:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.170 10:33:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.170 
10:33:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.170 10:33:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.170 10:33:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.170 10:33:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:06.170 10:33:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:06.170 10:33:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:26:06.170 10:33:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.170 10:33:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:06.170 10:33:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:06.170 10:33:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:06.170 10:33:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIyYmI2MzZiMzAwODlmN2RkZTMyYzY0MDc0YjM4MjS+gdJ6: 00:26:06.170 10:33:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDMxNGY5MDNmNGNlOTA0ZDNiNDZmNWI5NjIzMmJlYTRjODJhMGJlYmI4NjVkNDQyMjY1YzQxOGFjMzM1MDEzOFEPKVM=: 00:26:06.170 10:33:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:06.170 10:33:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:06.170 10:33:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIyYmI2MzZiMzAwODlmN2RkZTMyYzY0MDc0YjM4MjS+gdJ6: 00:26:06.170 10:33:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDMxNGY5MDNmNGNlOTA0ZDNiNDZmNWI5NjIzMmJlYTRjODJhMGJlYmI4NjVkNDQyMjY1YzQxOGFjMzM1MDEzOFEPKVM=: ]] 00:26:06.170 10:33:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDMxNGY5MDNmNGNlOTA0ZDNiNDZmNWI5NjIzMmJlYTRjODJhMGJlYmI4NjVkNDQyMjY1YzQxOGFjMzM1MDEzOFEPKVM=: 00:26:06.170 10:33:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:26:06.170 10:33:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:06.170 10:33:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:06.170 10:33:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:06.170 10:33:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:06.170 10:33:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:06.170 10:33:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:06.170 10:33:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.170 10:33:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.170 10:33:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.170 10:33:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:06.170 10:33:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:06.170 10:33:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:06.170 10:33:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:06.170 10:33:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.170 10:33:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.170 10:33:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:06.170 10:33:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:06.170 10:33:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:06.170 10:33:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:06.170 10:33:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:06.170 10:33:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:06.170 10:33:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.170 10:33:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.739 nvme0n1 00:26:06.739 10:33:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.739 10:33:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:06.739 10:33:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:06.739 10:33:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.739 10:33:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.739 10:33:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.739 10:33:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:06.739 10:33:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:06.739 10:33:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.739 10:33:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.999 10:33:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.999 10:33:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:06.999 10:33:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:26:06.999 10:33:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:06.999 10:33:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:06.999 10:33:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:06.999 10:33:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:06.999 10:33:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDNjNjRjMDQ4M2ExYjI1YmRhNDI0ZjQ4NzFiNzk4MmJhODExNTQxZDc2NjRhNGYwcKa7Qg==: 00:26:06.999 10:33:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTkwMzFkNDdlZTc1ZTc5NmMzYjEzMTE0ODFjMjBmZDMwYWUyODMxNTBmZWQ2NDE2bZEhIA==: 00:26:06.999 10:33:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:06.999 10:33:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:06.999 10:33:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDNjNjRjMDQ4M2ExYjI1YmRhNDI0ZjQ4NzFiNzk4MmJhODExNTQxZDc2NjRhNGYwcKa7Qg==: 00:26:06.999 10:33:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTkwMzFkNDdlZTc1ZTc5NmMzYjEzMTE0ODFjMjBmZDMwYWUyODMxNTBmZWQ2NDE2bZEhIA==: ]] 00:26:06.999 10:33:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:NTkwMzFkNDdlZTc1ZTc5NmMzYjEzMTE0ODFjMjBmZDMwYWUyODMxNTBmZWQ2NDE2bZEhIA==: 00:26:06.999 10:33:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:26:06.999 10:33:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:06.999 10:33:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:06.999 10:33:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:06.999 10:33:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:06.999 10:33:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:06.999 10:33:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:06.999 10:33:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.999 10:33:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.999 10:33:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.999 10:33:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:06.999 10:33:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:06.999 10:33:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:06.999 10:33:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:06.999 10:33:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:06.999 10:33:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:06.999 10:33:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:06.999 10:33:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:06.999 10:33:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:06.999 10:33:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:06.999 10:33:43 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:06.999 10:33:43 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:06.999 10:33:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.999 10:33:43 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.568 nvme0n1 00:26:07.568 10:33:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.568 10:33:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:07.568 10:33:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:07.568 10:33:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.568 10:33:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.568 10:33:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.568 10:33:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:07.568 10:33:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:07.568 10:33:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 
00:26:07.568 10:33:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.568 10:33:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.568 10:33:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:07.568 10:33:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:26:07.568 10:33:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:07.568 10:33:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:07.568 10:33:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:07.569 10:33:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:07.569 10:33:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjdmY2M5ODI3ODNlMjc2OWY3OTlkMzY4NmFlZTMxMDN8KQKJ: 00:26:07.569 10:33:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQyMzIyMzg4ZGVmYTMwMjkwMjMzN2I0ZmEyNDg3MDhJ0Uvw: 00:26:07.569 10:33:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:07.569 10:33:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:07.569 10:33:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjdmY2M5ODI3ODNlMjc2OWY3OTlkMzY4NmFlZTMxMDN8KQKJ: 00:26:07.569 10:33:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQyMzIyMzg4ZGVmYTMwMjkwMjMzN2I0ZmEyNDg3MDhJ0Uvw: ]] 00:26:07.569 10:33:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQyMzIyMzg4ZGVmYTMwMjkwMjMzN2I0ZmEyNDg3MDhJ0Uvw: 00:26:07.569 10:33:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:26:07.569 10:33:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:07.569 10:33:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:07.569 10:33:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:07.569 10:33:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:07.569 10:33:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:07.569 10:33:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:07.569 10:33:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.569 10:33:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.569 10:33:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.569 10:33:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:07.569 10:33:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:07.569 10:33:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:07.569 10:33:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:07.569 10:33:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:07.569 10:33:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:07.569 10:33:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:07.569 10:33:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:07.569 10:33:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:07.569 
10:33:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:07.569 10:33:44 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:07.569 10:33:44 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:07.569 10:33:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.569 10:33:44 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.139 nvme0n1 00:26:08.139 10:33:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.139 10:33:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:08.139 10:33:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.139 10:33:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:08.139 10:33:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.139 10:33:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.139 10:33:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:08.139 10:33:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:08.139 10:33:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.139 10:33:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.139 10:33:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.139 10:33:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:08.139 10:33:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:26:08.139 10:33:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:08.139 10:33:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:08.139 10:33:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:08.139 10:33:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:08.139 10:33:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTA2NDI5M2JlM2Q4OTFhNTkwYzJmNmFlZTk4Mzg1Yzk1NjczY2Q4MjQyZTUzZWIyOuxDtw==: 00:26:08.139 10:33:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjI2ODVhYjRmMWU5ZTJkNWI3OTEwOWJjZTZlZGI0NmISBty3: 00:26:08.139 10:33:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:08.139 10:33:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:08.139 10:33:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTA2NDI5M2JlM2Q4OTFhNTkwYzJmNmFlZTk4Mzg1Yzk1NjczY2Q4MjQyZTUzZWIyOuxDtw==: 00:26:08.139 10:33:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjI2ODVhYjRmMWU5ZTJkNWI3OTEwOWJjZTZlZGI0NmISBty3: ]] 00:26:08.139 10:33:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjI2ODVhYjRmMWU5ZTJkNWI3OTEwOWJjZTZlZGI0NmISBty3: 00:26:08.139 10:33:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:26:08.139 10:33:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:08.139 10:33:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:08.139 10:33:45 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:08.139 10:33:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:08.139 10:33:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:08.139 10:33:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:08.139 10:33:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.139 10:33:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.139 10:33:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.139 10:33:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:08.139 10:33:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:08.139 10:33:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:08.139 10:33:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:08.139 10:33:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:08.139 10:33:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:08.139 10:33:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:08.139 10:33:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:08.139 10:33:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:08.139 10:33:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:08.139 10:33:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:08.139 10:33:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:08.139 10:33:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.139 10:33:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.709 nvme0n1 00:26:08.709 10:33:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.709 10:33:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:08.709 10:33:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:08.709 10:33:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.709 10:33:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.709 10:33:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.709 10:33:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:08.709 10:33:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:08.709 10:33:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.709 10:33:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.709 10:33:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.709 10:33:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:08.709 10:33:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:26:08.709 10:33:45 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:08.709 10:33:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:08.709 10:33:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:08.709 10:33:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:08.709 10:33:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZThmNDI4MTIyM2ZkMTczYmZmZTg2NjliZmNkNDcwYTAyZDNmOTVjZjk1NzBhMTQ2Zjk2N2ZkNmI0MjY4NGI4M6+Fj4w=: 00:26:08.709 10:33:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:08.709 10:33:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:08.709 10:33:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:08.709 10:33:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZThmNDI4MTIyM2ZkMTczYmZmZTg2NjliZmNkNDcwYTAyZDNmOTVjZjk1NzBhMTQ2Zjk2N2ZkNmI0MjY4NGI4M6+Fj4w=: 00:26:08.709 10:33:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:08.709 10:33:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:26:08.709 10:33:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:08.709 10:33:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:08.709 10:33:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:08.709 10:33:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:08.709 10:33:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:08.709 10:33:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:08.709 10:33:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.709 10:33:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:08.709 10:33:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.709 10:33:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:08.709 10:33:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:08.709 10:33:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:08.709 10:33:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:08.709 10:33:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:08.709 10:33:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:08.709 10:33:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:08.709 10:33:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:08.709 10:33:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:08.709 10:33:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:08.709 10:33:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:08.709 10:33:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:08.709 10:33:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.709 10:33:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:26:09.280 nvme0n1 00:26:09.280 10:33:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.280 10:33:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:09.280 10:33:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:09.280 10:33:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.280 10:33:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.280 10:33:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.280 10:33:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:09.280 10:33:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:09.280 10:33:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.280 10:33:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.541 10:33:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.541 10:33:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:09.541 10:33:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:09.541 10:33:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:26:09.541 10:33:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:09.541 10:33:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:09.541 10:33:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:09.541 10:33:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:09.541 10:33:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIyYmI2MzZiMzAwODlmN2RkZTMyYzY0MDc0YjM4MjS+gdJ6: 00:26:09.541 10:33:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDMxNGY5MDNmNGNlOTA0ZDNiNDZmNWI5NjIzMmJlYTRjODJhMGJlYmI4NjVkNDQyMjY1YzQxOGFjMzM1MDEzOFEPKVM=: 00:26:09.541 10:33:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:09.541 10:33:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:09.541 10:33:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIyYmI2MzZiMzAwODlmN2RkZTMyYzY0MDc0YjM4MjS+gdJ6: 00:26:09.541 10:33:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDMxNGY5MDNmNGNlOTA0ZDNiNDZmNWI5NjIzMmJlYTRjODJhMGJlYmI4NjVkNDQyMjY1YzQxOGFjMzM1MDEzOFEPKVM=: ]] 00:26:09.541 10:33:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDMxNGY5MDNmNGNlOTA0ZDNiNDZmNWI5NjIzMmJlYTRjODJhMGJlYmI4NjVkNDQyMjY1YzQxOGFjMzM1MDEzOFEPKVM=: 00:26:09.541 10:33:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:26:09.541 10:33:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:09.541 10:33:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:09.541 10:33:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:09.541 10:33:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:09.542 10:33:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:09.542 10:33:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:09.542 10:33:46 
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.542 10:33:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:09.542 10:33:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:09.542 10:33:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:09.542 10:33:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:09.542 10:33:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:09.542 10:33:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:09.542 10:33:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:09.542 10:33:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:09.542 10:33:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:09.542 10:33:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:09.542 10:33:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:09.542 10:33:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:09.542 10:33:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:09.542 10:33:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:09.542 10:33:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.542 10:33:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.482 nvme0n1 00:26:10.482 10:33:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.482 10:33:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:10.482 10:33:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:10.482 10:33:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.482 10:33:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.482 10:33:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.482 10:33:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:10.482 10:33:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:10.482 10:33:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.482 10:33:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.482 10:33:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.482 10:33:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:10.482 10:33:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:26:10.482 10:33:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:10.482 10:33:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:10.482 10:33:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:10.482 10:33:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:10.482 10:33:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MDNjNjRjMDQ4M2ExYjI1YmRhNDI0ZjQ4NzFiNzk4MmJhODExNTQxZDc2NjRhNGYwcKa7Qg==: 00:26:10.482 10:33:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTkwMzFkNDdlZTc1ZTc5NmMzYjEzMTE0ODFjMjBmZDMwYWUyODMxNTBmZWQ2NDE2bZEhIA==: 00:26:10.482 10:33:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:10.482 10:33:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:10.482 10:33:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDNjNjRjMDQ4M2ExYjI1YmRhNDI0ZjQ4NzFiNzk4MmJhODExNTQxZDc2NjRhNGYwcKa7Qg==: 00:26:10.482 10:33:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTkwMzFkNDdlZTc1ZTc5NmMzYjEzMTE0ODFjMjBmZDMwYWUyODMxNTBmZWQ2NDE2bZEhIA==: ]] 00:26:10.482 10:33:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTkwMzFkNDdlZTc1ZTc5NmMzYjEzMTE0ODFjMjBmZDMwYWUyODMxNTBmZWQ2NDE2bZEhIA==: 00:26:10.482 10:33:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:26:10.482 10:33:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:10.482 10:33:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:10.482 10:33:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:10.482 10:33:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:10.482 10:33:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:10.482 10:33:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:10.482 10:33:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.482 10:33:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.482 10:33:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.482 10:33:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:10.482 10:33:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:10.482 10:33:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:10.482 10:33:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:10.482 10:33:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:10.482 10:33:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:10.482 10:33:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:10.482 10:33:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:10.482 10:33:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:10.482 10:33:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:10.482 10:33:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:10.482 10:33:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:10.482 10:33:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.482 10:33:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.422 nvme0n1 00:26:11.422 10:33:48 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.422 10:33:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:11.422 10:33:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:11.422 10:33:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.422 10:33:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.422 10:33:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.422 10:33:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:11.422 10:33:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:11.422 10:33:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.422 10:33:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.422 10:33:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.422 10:33:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:11.422 10:33:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:26:11.422 10:33:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:11.422 10:33:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:11.422 10:33:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:11.422 10:33:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:11.422 10:33:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjdmY2M5ODI3ODNlMjc2OWY3OTlkMzY4NmFlZTMxMDN8KQKJ: 00:26:11.422 10:33:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQyMzIyMzg4ZGVmYTMwMjkwMjMzN2I0ZmEyNDg3MDhJ0Uvw: 00:26:11.422 10:33:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:11.422 10:33:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:11.422 10:33:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjdmY2M5ODI3ODNlMjc2OWY3OTlkMzY4NmFlZTMxMDN8KQKJ: 00:26:11.422 10:33:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQyMzIyMzg4ZGVmYTMwMjkwMjMzN2I0ZmEyNDg3MDhJ0Uvw: ]] 00:26:11.422 10:33:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQyMzIyMzg4ZGVmYTMwMjkwMjMzN2I0ZmEyNDg3MDhJ0Uvw: 00:26:11.422 10:33:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:26:11.422 10:33:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:11.422 10:33:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:11.422 10:33:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:11.422 10:33:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:11.422 10:33:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:11.422 10:33:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:11.422 10:33:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.422 10:33:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.422 10:33:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.422 10:33:48 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:26:11.422 10:33:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:11.422 10:33:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:11.422 10:33:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:11.422 10:33:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:11.422 10:33:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:11.422 10:33:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:11.422 10:33:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:11.422 10:33:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:11.422 10:33:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:11.422 10:33:48 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:11.422 10:33:48 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:11.422 10:33:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:11.422 10:33:48 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.360 nvme0n1 00:26:12.360 10:33:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.360 10:33:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:12.360 10:33:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:12.360 10:33:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.360 10:33:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.360 10:33:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.361 10:33:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:12.361 10:33:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:12.361 10:33:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.361 10:33:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.361 10:33:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.361 10:33:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:12.361 10:33:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:26:12.361 10:33:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:12.361 10:33:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:12.361 10:33:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:12.361 10:33:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:12.361 10:33:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTA2NDI5M2JlM2Q4OTFhNTkwYzJmNmFlZTk4Mzg1Yzk1NjczY2Q4MjQyZTUzZWIyOuxDtw==: 00:26:12.361 10:33:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjI2ODVhYjRmMWU5ZTJkNWI3OTEwOWJjZTZlZGI0NmISBty3: 00:26:12.361 10:33:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:12.361 10:33:49 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:12.361 10:33:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTA2NDI5M2JlM2Q4OTFhNTkwYzJmNmFlZTk4Mzg1Yzk1NjczY2Q4MjQyZTUzZWIyOuxDtw==: 00:26:12.361 10:33:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjI2ODVhYjRmMWU5ZTJkNWI3OTEwOWJjZTZlZGI0NmISBty3: ]] 00:26:12.361 10:33:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjI2ODVhYjRmMWU5ZTJkNWI3OTEwOWJjZTZlZGI0NmISBty3: 00:26:12.361 10:33:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:26:12.361 10:33:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:12.361 10:33:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:12.361 10:33:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:12.361 10:33:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:12.361 10:33:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:12.361 10:33:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:12.361 10:33:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.361 10:33:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.361 10:33:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.361 10:33:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:12.361 10:33:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:12.361 10:33:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:12.361 10:33:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:12.361 10:33:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:12.361 10:33:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:12.361 10:33:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:12.361 10:33:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:12.361 10:33:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:12.361 10:33:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:12.361 10:33:49 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:12.361 10:33:49 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:12.361 10:33:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.361 10:33:49 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.300 nvme0n1 00:26:13.300 10:33:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.300 10:33:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:13.300 10:33:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.300 10:33:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:13.300 10:33:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # 
set +x 00:26:13.300 10:33:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.300 10:33:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:13.300 10:33:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:13.300 10:33:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.300 10:33:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.300 10:33:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.300 10:33:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:13.300 10:33:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:26:13.300 10:33:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:13.300 10:33:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:13.300 10:33:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:13.300 10:33:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:13.300 10:33:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZThmNDI4MTIyM2ZkMTczYmZmZTg2NjliZmNkNDcwYTAyZDNmOTVjZjk1NzBhMTQ2Zjk2N2ZkNmI0MjY4NGI4M6+Fj4w=: 00:26:13.300 10:33:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:13.300 10:33:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:13.300 10:33:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:13.300 10:33:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZThmNDI4MTIyM2ZkMTczYmZmZTg2NjliZmNkNDcwYTAyZDNmOTVjZjk1NzBhMTQ2Zjk2N2ZkNmI0MjY4NGI4M6+Fj4w=: 00:26:13.300 10:33:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:13.300 10:33:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:26:13.300 10:33:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:13.300 10:33:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:13.300 10:33:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:13.300 10:33:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:13.300 10:33:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:13.300 10:33:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:13.300 10:33:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.300 10:33:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.301 10:33:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.301 10:33:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:13.301 10:33:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:13.301 10:33:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:13.301 10:33:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:13.301 10:33:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:13.301 10:33:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:13.301 10:33:50 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:13.301 10:33:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:13.301 10:33:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:13.301 10:33:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:13.301 10:33:50 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:13.301 10:33:50 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:13.301 10:33:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.301 10:33:50 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.238 nvme0n1 00:26:14.238 10:33:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.238 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.238 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.238 10:33:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.238 10:33:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.238 10:33:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.238 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.238 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.239 10:33:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.239 10:33:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.239 10:33:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.239 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:14.239 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:14.239 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.239 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:26:14.239 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.239 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:14.239 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:14.239 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:14.239 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIyYmI2MzZiMzAwODlmN2RkZTMyYzY0MDc0YjM4MjS+gdJ6: 00:26:14.239 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDMxNGY5MDNmNGNlOTA0ZDNiNDZmNWI5NjIzMmJlYTRjODJhMGJlYmI4NjVkNDQyMjY1YzQxOGFjMzM1MDEzOFEPKVM=: 00:26:14.239 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:14.239 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:14.239 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIyYmI2MzZiMzAwODlmN2RkZTMyYzY0MDc0YjM4MjS+gdJ6: 00:26:14.239 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDMxNGY5MDNmNGNlOTA0ZDNiNDZmNWI5NjIzMmJlYTRjODJhMGJlYmI4NjVkNDQyMjY1YzQxOGFjMzM1MDEzOFEPKVM=: ]] 00:26:14.239 
10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDMxNGY5MDNmNGNlOTA0ZDNiNDZmNWI5NjIzMmJlYTRjODJhMGJlYmI4NjVkNDQyMjY1YzQxOGFjMzM1MDEzOFEPKVM=: 00:26:14.239 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:26:14.239 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.239 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:14.239 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:14.239 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:14.239 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.239 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:14.239 10:33:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.239 10:33:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.239 10:33:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.239 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.239 10:33:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:14.239 10:33:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:14.239 10:33:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:14.239 10:33:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.239 10:33:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.239 10:33:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:14.239 10:33:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:14.239 10:33:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:14.239 10:33:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:14.239 10:33:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:14.239 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:14.239 10:33:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.239 10:33:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.239 nvme0n1 00:26:14.239 10:33:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.239 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.239 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.239 10:33:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.239 10:33:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.239 10:33:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.500 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.500 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.500 
10:33:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.500 10:33:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.500 10:33:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.500 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.500 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:26:14.500 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.500 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:14.500 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:14.500 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:14.500 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDNjNjRjMDQ4M2ExYjI1YmRhNDI0ZjQ4NzFiNzk4MmJhODExNTQxZDc2NjRhNGYwcKa7Qg==: 00:26:14.500 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTkwMzFkNDdlZTc1ZTc5NmMzYjEzMTE0ODFjMjBmZDMwYWUyODMxNTBmZWQ2NDE2bZEhIA==: 00:26:14.500 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:14.500 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:14.500 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDNjNjRjMDQ4M2ExYjI1YmRhNDI0ZjQ4NzFiNzk4MmJhODExNTQxZDc2NjRhNGYwcKa7Qg==: 00:26:14.500 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTkwMzFkNDdlZTc1ZTc5NmMzYjEzMTE0ODFjMjBmZDMwYWUyODMxNTBmZWQ2NDE2bZEhIA==: ]] 00:26:14.500 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTkwMzFkNDdlZTc1ZTc5NmMzYjEzMTE0ODFjMjBmZDMwYWUyODMxNTBmZWQ2NDE2bZEhIA==: 00:26:14.500 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:26:14.500 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:14.500 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:14.500 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:14.500 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:14.500 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.500 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:14.500 10:33:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.500 10:33:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.500 10:33:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.500 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.500 10:33:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:14.500 10:33:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:14.500 10:33:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:14.500 10:33:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.500 10:33:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.500 10:33:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:14.500 
10:33:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:14.500 10:33:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:14.500 10:33:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:14.500 10:33:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:14.500 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:14.500 10:33:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.500 10:33:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.762 nvme0n1 00:26:14.762 10:33:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.762 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:14.762 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:14.762 10:33:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.762 10:33:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.762 10:33:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.762 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:14.762 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:14.762 10:33:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.762 10:33:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.762 10:33:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.762 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:14.762 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:26:14.762 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:14.762 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:14.762 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:14.762 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:14.762 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjdmY2M5ODI3ODNlMjc2OWY3OTlkMzY4NmFlZTMxMDN8KQKJ: 00:26:14.762 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQyMzIyMzg4ZGVmYTMwMjkwMjMzN2I0ZmEyNDg3MDhJ0Uvw: 00:26:14.762 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:14.762 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:14.762 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjdmY2M5ODI3ODNlMjc2OWY3OTlkMzY4NmFlZTMxMDN8KQKJ: 00:26:14.762 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQyMzIyMzg4ZGVmYTMwMjkwMjMzN2I0ZmEyNDg3MDhJ0Uvw: ]] 00:26:14.762 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQyMzIyMzg4ZGVmYTMwMjkwMjMzN2I0ZmEyNDg3MDhJ0Uvw: 00:26:14.762 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:26:14.762 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:26:14.762 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:14.762 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:14.762 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:14.762 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:14.762 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:14.762 10:33:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.762 10:33:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.762 10:33:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.762 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:14.762 10:33:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:14.762 10:33:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:14.762 10:33:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:14.762 10:33:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:14.762 10:33:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:14.762 10:33:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:14.762 10:33:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:14.762 10:33:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:14.762 10:33:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:14.762 10:33:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:14.762 10:33:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:14.762 10:33:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.762 10:33:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.022 nvme0n1 00:26:15.023 10:33:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.023 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.023 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.023 10:33:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.023 10:33:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.023 10:33:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.023 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.023 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.023 10:33:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.023 10:33:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.023 10:33:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.023 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:26:15.023 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:26:15.023 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.023 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:15.023 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:15.023 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:15.023 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTA2NDI5M2JlM2Q4OTFhNTkwYzJmNmFlZTk4Mzg1Yzk1NjczY2Q4MjQyZTUzZWIyOuxDtw==: 00:26:15.023 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjI2ODVhYjRmMWU5ZTJkNWI3OTEwOWJjZTZlZGI0NmISBty3: 00:26:15.023 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:15.023 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:15.023 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTA2NDI5M2JlM2Q4OTFhNTkwYzJmNmFlZTk4Mzg1Yzk1NjczY2Q4MjQyZTUzZWIyOuxDtw==: 00:26:15.023 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjI2ODVhYjRmMWU5ZTJkNWI3OTEwOWJjZTZlZGI0NmISBty3: ]] 00:26:15.023 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjI2ODVhYjRmMWU5ZTJkNWI3OTEwOWJjZTZlZGI0NmISBty3: 00:26:15.023 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:26:15.023 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.023 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:15.023 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:15.023 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:15.023 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.023 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:15.023 10:33:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.023 10:33:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.023 10:33:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.023 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.023 10:33:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:15.023 10:33:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:15.023 10:33:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:15.023 10:33:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.023 10:33:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.023 10:33:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:15.023 10:33:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:15.023 10:33:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:15.023 10:33:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:15.023 10:33:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:15.023 10:33:52 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:15.023 10:33:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.023 10:33:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.283 nvme0n1 00:26:15.283 10:33:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.283 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.283 10:33:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.283 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.283 10:33:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.283 10:33:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.283 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.283 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.283 10:33:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.283 10:33:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.283 10:33:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.283 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.283 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:26:15.283 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.283 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:15.283 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:15.283 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:15.283 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZThmNDI4MTIyM2ZkMTczYmZmZTg2NjliZmNkNDcwYTAyZDNmOTVjZjk1NzBhMTQ2Zjk2N2ZkNmI0MjY4NGI4M6+Fj4w=: 00:26:15.283 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:15.283 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:15.283 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:15.283 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZThmNDI4MTIyM2ZkMTczYmZmZTg2NjliZmNkNDcwYTAyZDNmOTVjZjk1NzBhMTQ2Zjk2N2ZkNmI0MjY4NGI4M6+Fj4w=: 00:26:15.283 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:15.283 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:26:15.283 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.283 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:15.283 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:15.283 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:15.283 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.283 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:15.283 10:33:52 
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.283 10:33:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.283 10:33:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.283 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.283 10:33:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:15.283 10:33:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:15.283 10:33:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:15.283 10:33:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.283 10:33:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.283 10:33:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:15.283 10:33:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:15.283 10:33:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:15.283 10:33:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:15.283 10:33:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:15.283 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:15.284 10:33:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.284 10:33:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.543 nvme0n1 00:26:15.543 10:33:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.543 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:15.543 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:15.543 10:33:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.543 10:33:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.543 10:33:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.803 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:15.803 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:15.803 10:33:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.803 10:33:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.803 10:33:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.803 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:15.803 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:15.803 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:26:15.804 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:15.804 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:15.804 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:15.804 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:15.804 10:33:52 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIyYmI2MzZiMzAwODlmN2RkZTMyYzY0MDc0YjM4MjS+gdJ6: 00:26:15.804 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDMxNGY5MDNmNGNlOTA0ZDNiNDZmNWI5NjIzMmJlYTRjODJhMGJlYmI4NjVkNDQyMjY1YzQxOGFjMzM1MDEzOFEPKVM=: 00:26:15.804 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:15.804 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:15.804 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIyYmI2MzZiMzAwODlmN2RkZTMyYzY0MDc0YjM4MjS+gdJ6: 00:26:15.804 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDMxNGY5MDNmNGNlOTA0ZDNiNDZmNWI5NjIzMmJlYTRjODJhMGJlYmI4NjVkNDQyMjY1YzQxOGFjMzM1MDEzOFEPKVM=: ]] 00:26:15.804 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDMxNGY5MDNmNGNlOTA0ZDNiNDZmNWI5NjIzMmJlYTRjODJhMGJlYmI4NjVkNDQyMjY1YzQxOGFjMzM1MDEzOFEPKVM=: 00:26:15.804 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:26:15.804 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:15.804 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:15.804 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:15.804 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:15.804 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:15.804 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:15.804 10:33:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.804 10:33:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.804 10:33:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:15.804 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:15.804 10:33:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:15.804 10:33:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:15.804 10:33:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:15.804 10:33:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.804 10:33:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.804 10:33:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:15.804 10:33:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:15.804 10:33:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:15.804 10:33:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:15.804 10:33:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:15.804 10:33:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:15.804 10:33:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:15.804 10:33:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.064 nvme0n1 
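Each pass of the trace above repeats the same host-side cycle: bdev_nvme_set_options pins one digest and DH group, bdev_nvme_attach_controller connects over RDMA with the matching --dhchap-key/--dhchap-ctrlr-key names, bdev_nvme_get_controllers confirms that nvme0 came up (i.e. DH-HMAC-CHAP succeeded), and bdev_nvme_detach_controller tears it down before the next combination. A minimal standalone sketch of the sha384/ffdhe3072/key0 iteration follows; it assumes rpc_cmd in the trace wraps SPDK's scripts/rpc.py, that the target already exposes nqn.2024-02.io.spdk:cnode0 at 192.168.100.8:4420 (the harness also mirrors each combination on the target with nvmet_auth_set_key), and that the key0/ckey0 secrets were registered beforehand; the rpc.py path is illustrative.

#!/usr/bin/env bash
# Sketch of one digest/dhgroup/keyid iteration from the loop traced above.
# Assumptions: rpc points at SPDK's scripts/rpc.py, the RDMA target is already
# serving nqn.2024-02.io.spdk:cnode0 on 192.168.100.8:4420, and the DH-HMAC-CHAP
# secrets are already registered under the names key0/ckey0 (done elsewhere by
# the harness before this loop runs).
set -e

rpc=./scripts/rpc.py   # hypothetical path to the SPDK RPC client

# Restrict the initiator to a single digest / DH group combination.
$rpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

# Attach over RDMA, authenticating with key0 and requiring the controller to
# authenticate back with ckey0 (bidirectional DH-HMAC-CHAP).
$rpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
    -a 192.168.100.8 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# The attach only succeeds if authentication passed; verify, then clean up so
# the next digest/dhgroup/keyid combination starts from a detached state.
[[ "$($rpc bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]
$rpc bdev_nvme_detach_controller nvme0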
00:26:16.064 10:33:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.064 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.064 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.064 10:33:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.064 10:33:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.064 10:33:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.064 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.064 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.064 10:33:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.064 10:33:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.064 10:33:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.064 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:16.064 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:26:16.064 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.064 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:16.064 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:16.064 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:16.064 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDNjNjRjMDQ4M2ExYjI1YmRhNDI0ZjQ4NzFiNzk4MmJhODExNTQxZDc2NjRhNGYwcKa7Qg==: 00:26:16.064 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTkwMzFkNDdlZTc1ZTc5NmMzYjEzMTE0ODFjMjBmZDMwYWUyODMxNTBmZWQ2NDE2bZEhIA==: 00:26:16.064 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:16.064 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:16.064 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDNjNjRjMDQ4M2ExYjI1YmRhNDI0ZjQ4NzFiNzk4MmJhODExNTQxZDc2NjRhNGYwcKa7Qg==: 00:26:16.064 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTkwMzFkNDdlZTc1ZTc5NmMzYjEzMTE0ODFjMjBmZDMwYWUyODMxNTBmZWQ2NDE2bZEhIA==: ]] 00:26:16.064 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTkwMzFkNDdlZTc1ZTc5NmMzYjEzMTE0ODFjMjBmZDMwYWUyODMxNTBmZWQ2NDE2bZEhIA==: 00:26:16.064 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:26:16.064 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:16.064 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:16.064 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:16.064 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:16.064 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:16.064 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:16.064 10:33:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.064 10:33:53 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:16.064 10:33:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.064 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.064 10:33:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:16.064 10:33:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:16.064 10:33:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:16.064 10:33:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.064 10:33:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.064 10:33:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:16.064 10:33:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:16.064 10:33:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:16.064 10:33:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:16.064 10:33:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:16.064 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:16.064 10:33:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.064 10:33:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.325 nvme0n1 00:26:16.325 10:33:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.325 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.325 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.325 10:33:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.325 10:33:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.325 10:33:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.325 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.325 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.325 10:33:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.325 10:33:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.325 10:33:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.325 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:16.325 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:26:16.325 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.325 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:16.325 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:16.325 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:16.325 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjdmY2M5ODI3ODNlMjc2OWY3OTlkMzY4NmFlZTMxMDN8KQKJ: 00:26:16.325 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:YTQyMzIyMzg4ZGVmYTMwMjkwMjMzN2I0ZmEyNDg3MDhJ0Uvw: 00:26:16.325 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:16.325 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:16.325 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjdmY2M5ODI3ODNlMjc2OWY3OTlkMzY4NmFlZTMxMDN8KQKJ: 00:26:16.325 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQyMzIyMzg4ZGVmYTMwMjkwMjMzN2I0ZmEyNDg3MDhJ0Uvw: ]] 00:26:16.325 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQyMzIyMzg4ZGVmYTMwMjkwMjMzN2I0ZmEyNDg3MDhJ0Uvw: 00:26:16.325 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:26:16.325 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:16.325 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:16.325 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:16.325 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:16.325 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:16.325 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:16.325 10:33:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.325 10:33:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.325 10:33:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.325 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.325 10:33:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:16.325 10:33:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:16.325 10:33:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:16.325 10:33:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.325 10:33:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.325 10:33:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:16.325 10:33:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:16.325 10:33:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:16.325 10:33:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:16.325 10:33:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:16.325 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:16.325 10:33:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.325 10:33:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.585 nvme0n1 00:26:16.585 10:33:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.585 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:16.585 10:33:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.585 
10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:16.585 10:33:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.585 10:33:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.845 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:16.845 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.845 10:33:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.845 10:33:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.845 10:33:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.845 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:16.845 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:26:16.845 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:16.845 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:16.845 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:16.845 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:16.845 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTA2NDI5M2JlM2Q4OTFhNTkwYzJmNmFlZTk4Mzg1Yzk1NjczY2Q4MjQyZTUzZWIyOuxDtw==: 00:26:16.845 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjI2ODVhYjRmMWU5ZTJkNWI3OTEwOWJjZTZlZGI0NmISBty3: 00:26:16.845 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:16.845 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:16.845 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTA2NDI5M2JlM2Q4OTFhNTkwYzJmNmFlZTk4Mzg1Yzk1NjczY2Q4MjQyZTUzZWIyOuxDtw==: 00:26:16.845 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjI2ODVhYjRmMWU5ZTJkNWI3OTEwOWJjZTZlZGI0NmISBty3: ]] 00:26:16.845 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjI2ODVhYjRmMWU5ZTJkNWI3OTEwOWJjZTZlZGI0NmISBty3: 00:26:16.845 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:26:16.845 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:16.845 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:16.845 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:16.845 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:16.845 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:16.845 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:16.845 10:33:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.845 10:33:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:16.845 10:33:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:16.845 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:16.845 10:33:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:16.845 10:33:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:16.845 
10:33:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:16.845 10:33:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:16.845 10:33:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:16.845 10:33:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:16.845 10:33:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:16.845 10:33:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:16.845 10:33:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:16.845 10:33:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:16.845 10:33:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:16.845 10:33:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:16.845 10:33:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.105 nvme0n1 00:26:17.105 10:33:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.105 10:33:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.105 10:33:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.105 10:33:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.105 10:33:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.105 10:33:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.105 10:33:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.105 10:33:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.105 10:33:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.105 10:33:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.105 10:33:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.105 10:33:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.106 10:33:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:26:17.106 10:33:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.106 10:33:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:17.106 10:33:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:17.106 10:33:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:17.106 10:33:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZThmNDI4MTIyM2ZkMTczYmZmZTg2NjliZmNkNDcwYTAyZDNmOTVjZjk1NzBhMTQ2Zjk2N2ZkNmI0MjY4NGI4M6+Fj4w=: 00:26:17.106 10:33:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:17.106 10:33:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:17.106 10:33:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:17.106 10:33:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZThmNDI4MTIyM2ZkMTczYmZmZTg2NjliZmNkNDcwYTAyZDNmOTVjZjk1NzBhMTQ2Zjk2N2ZkNmI0MjY4NGI4M6+Fj4w=: 00:26:17.106 10:33:54 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:17.106 10:33:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:26:17.106 10:33:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.106 10:33:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:17.106 10:33:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:17.106 10:33:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:17.106 10:33:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.106 10:33:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:17.106 10:33:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.106 10:33:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.106 10:33:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.106 10:33:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.106 10:33:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:17.106 10:33:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:17.106 10:33:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:17.106 10:33:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.106 10:33:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.106 10:33:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:17.106 10:33:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:17.106 10:33:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:17.106 10:33:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:17.106 10:33:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:17.106 10:33:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:17.106 10:33:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.106 10:33:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.366 nvme0n1 00:26:17.366 10:33:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.366 10:33:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.366 10:33:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.366 10:33:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.366 10:33:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.366 10:33:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.366 10:33:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.366 10:33:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.366 10:33:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.366 10:33:54 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:17.628 10:33:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.628 10:33:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:17.628 10:33:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.628 10:33:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:26:17.628 10:33:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.628 10:33:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:17.628 10:33:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:17.628 10:33:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:17.628 10:33:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIyYmI2MzZiMzAwODlmN2RkZTMyYzY0MDc0YjM4MjS+gdJ6: 00:26:17.628 10:33:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDMxNGY5MDNmNGNlOTA0ZDNiNDZmNWI5NjIzMmJlYTRjODJhMGJlYmI4NjVkNDQyMjY1YzQxOGFjMzM1MDEzOFEPKVM=: 00:26:17.628 10:33:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:17.629 10:33:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:17.629 10:33:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIyYmI2MzZiMzAwODlmN2RkZTMyYzY0MDc0YjM4MjS+gdJ6: 00:26:17.629 10:33:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDMxNGY5MDNmNGNlOTA0ZDNiNDZmNWI5NjIzMmJlYTRjODJhMGJlYmI4NjVkNDQyMjY1YzQxOGFjMzM1MDEzOFEPKVM=: ]] 00:26:17.629 10:33:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDMxNGY5MDNmNGNlOTA0ZDNiNDZmNWI5NjIzMmJlYTRjODJhMGJlYmI4NjVkNDQyMjY1YzQxOGFjMzM1MDEzOFEPKVM=: 00:26:17.629 10:33:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:26:17.629 10:33:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.629 10:33:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:17.629 10:33:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:17.629 10:33:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:17.629 10:33:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.629 10:33:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:17.629 10:33:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.629 10:33:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.629 10:33:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.629 10:33:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.629 10:33:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:17.629 10:33:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:17.629 10:33:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:17.629 10:33:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.629 10:33:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.629 10:33:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:17.629 10:33:54 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:17.629 10:33:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:17.629 10:33:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:17.629 10:33:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:17.629 10:33:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:17.629 10:33:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.629 10:33:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.890 nvme0n1 00:26:17.890 10:33:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.890 10:33:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:17.890 10:33:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:17.890 10:33:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.890 10:33:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.890 10:33:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.890 10:33:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:17.890 10:33:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.890 10:33:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.890 10:33:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.890 10:33:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.890 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:17.890 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:26:17.890 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:17.890 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:17.890 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:17.890 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:17.890 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDNjNjRjMDQ4M2ExYjI1YmRhNDI0ZjQ4NzFiNzk4MmJhODExNTQxZDc2NjRhNGYwcKa7Qg==: 00:26:17.890 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTkwMzFkNDdlZTc1ZTc5NmMzYjEzMTE0ODFjMjBmZDMwYWUyODMxNTBmZWQ2NDE2bZEhIA==: 00:26:17.890 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:17.890 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:17.890 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDNjNjRjMDQ4M2ExYjI1YmRhNDI0ZjQ4NzFiNzk4MmJhODExNTQxZDc2NjRhNGYwcKa7Qg==: 00:26:17.890 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTkwMzFkNDdlZTc1ZTc5NmMzYjEzMTE0ODFjMjBmZDMwYWUyODMxNTBmZWQ2NDE2bZEhIA==: ]] 00:26:17.890 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTkwMzFkNDdlZTc1ZTc5NmMzYjEzMTE0ODFjMjBmZDMwYWUyODMxNTBmZWQ2NDE2bZEhIA==: 00:26:17.890 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha384 ffdhe4096 1 00:26:17.890 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:17.890 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:17.890 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:17.890 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:17.890 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:17.890 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:17.890 10:33:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.890 10:33:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.890 10:33:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.890 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:17.890 10:33:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:17.890 10:33:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:17.890 10:33:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:17.890 10:33:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:17.890 10:33:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:17.890 10:33:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:17.890 10:33:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:17.890 10:33:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:17.890 10:33:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:17.890 10:33:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:17.890 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:17.890 10:33:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.890 10:33:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.460 nvme0n1 00:26:18.460 10:33:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.460 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.460 10:33:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.460 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:18.460 10:33:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.460 10:33:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.460 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:18.460 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.460 10:33:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.460 10:33:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.460 10:33:55 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.460 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:18.460 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:26:18.460 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.460 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:18.460 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:18.460 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:18.460 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjdmY2M5ODI3ODNlMjc2OWY3OTlkMzY4NmFlZTMxMDN8KQKJ: 00:26:18.460 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQyMzIyMzg4ZGVmYTMwMjkwMjMzN2I0ZmEyNDg3MDhJ0Uvw: 00:26:18.460 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:18.460 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:18.460 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjdmY2M5ODI3ODNlMjc2OWY3OTlkMzY4NmFlZTMxMDN8KQKJ: 00:26:18.460 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQyMzIyMzg4ZGVmYTMwMjkwMjMzN2I0ZmEyNDg3MDhJ0Uvw: ]] 00:26:18.460 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQyMzIyMzg4ZGVmYTMwMjkwMjMzN2I0ZmEyNDg3MDhJ0Uvw: 00:26:18.460 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:26:18.460 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:18.460 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:18.460 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:18.460 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:18.460 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:18.460 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:18.460 10:33:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.460 10:33:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.460 10:33:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.460 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:18.460 10:33:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:18.460 10:33:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:18.460 10:33:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:18.460 10:33:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.460 10:33:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.460 10:33:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:18.460 10:33:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:18.460 10:33:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:18.460 10:33:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:18.460 10:33:55 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:18.460 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:18.460 10:33:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.460 10:33:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.719 nvme0n1 00:26:18.719 10:33:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.719 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:18.719 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:18.719 10:33:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.719 10:33:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.719 10:33:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.719 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:18.719 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:18.719 10:33:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.719 10:33:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.980 10:33:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.980 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:18.980 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:26:18.980 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:18.980 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:18.980 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:18.980 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:18.980 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTA2NDI5M2JlM2Q4OTFhNTkwYzJmNmFlZTk4Mzg1Yzk1NjczY2Q4MjQyZTUzZWIyOuxDtw==: 00:26:18.980 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjI2ODVhYjRmMWU5ZTJkNWI3OTEwOWJjZTZlZGI0NmISBty3: 00:26:18.980 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:18.980 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:18.980 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTA2NDI5M2JlM2Q4OTFhNTkwYzJmNmFlZTk4Mzg1Yzk1NjczY2Q4MjQyZTUzZWIyOuxDtw==: 00:26:18.980 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjI2ODVhYjRmMWU5ZTJkNWI3OTEwOWJjZTZlZGI0NmISBty3: ]] 00:26:18.980 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjI2ODVhYjRmMWU5ZTJkNWI3OTEwOWJjZTZlZGI0NmISBty3: 00:26:18.980 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:26:18.980 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:18.980 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:18.980 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:18.980 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:18.980 
10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:18.980 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:18.980 10:33:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.980 10:33:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:18.980 10:33:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:18.980 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:18.980 10:33:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:18.980 10:33:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:18.980 10:33:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:18.980 10:33:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:18.980 10:33:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:18.980 10:33:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:18.980 10:33:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:18.980 10:33:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:18.980 10:33:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:18.980 10:33:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:18.980 10:33:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:18.980 10:33:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:18.980 10:33:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.241 nvme0n1 00:26:19.241 10:33:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.241 10:33:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.241 10:33:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:19.241 10:33:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.241 10:33:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.241 10:33:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.241 10:33:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.241 10:33:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.241 10:33:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.241 10:33:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.241 10:33:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.241 10:33:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:19.241 10:33:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:26:19.241 10:33:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:19.241 10:33:56 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:26:19.241 10:33:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:19.241 10:33:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:19.241 10:33:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZThmNDI4MTIyM2ZkMTczYmZmZTg2NjliZmNkNDcwYTAyZDNmOTVjZjk1NzBhMTQ2Zjk2N2ZkNmI0MjY4NGI4M6+Fj4w=: 00:26:19.241 10:33:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:19.241 10:33:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:19.241 10:33:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:19.241 10:33:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZThmNDI4MTIyM2ZkMTczYmZmZTg2NjliZmNkNDcwYTAyZDNmOTVjZjk1NzBhMTQ2Zjk2N2ZkNmI0MjY4NGI4M6+Fj4w=: 00:26:19.241 10:33:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:19.241 10:33:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:26:19.241 10:33:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:19.241 10:33:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:19.241 10:33:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:19.241 10:33:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:19.241 10:33:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:19.241 10:33:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:19.241 10:33:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.241 10:33:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.241 10:33:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.241 10:33:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:19.241 10:33:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:19.241 10:33:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:19.241 10:33:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:19.241 10:33:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.241 10:33:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.241 10:33:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:19.241 10:33:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:19.241 10:33:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:19.241 10:33:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:19.241 10:33:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:19.241 10:33:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:19.241 10:33:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.241 10:33:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.812 nvme0n1 00:26:19.812 10:33:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
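Each nvme0n1 block above is one pass of the same nested loop; at this point the trace finishes the ffdhe4096 passes and moves on to ffdhe6144. Read from host/auth.sh@101-103 in the trace, the driving loop plausibly has the shape sketched below, where keys/ckeys hold the DHHC-1 secrets echoed into the target configuration; sha384 is the digest exercised in this slice of the log, and the exact loop bounds are an assumption.

  # Hedged sketch of the outer loop implied by host/auth.sh@101-103 in the trace.
  for dhgroup in "${dhgroups[@]}"; do      # ffdhe3072, ffdhe4096, ffdhe6144, ...
      for keyid in "${!keys[@]}"; do       # key indexes 0..4 in this run
          # Program the nvmet target with the same digest/dhgroup/key
          # (host/auth.sh@103), then authenticate from the host side.
          nvmet_auth_set_key sha384 "$dhgroup" "$keyid"
          connect_authenticate sha384 "$dhgroup" "$keyid"
      done
  done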
00:26:19.812 10:33:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:19.812 10:33:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:19.812 10:33:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.812 10:33:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.812 10:33:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.812 10:33:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:19.813 10:33:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:19.813 10:33:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.813 10:33:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.813 10:33:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.813 10:33:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:19.813 10:33:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:19.813 10:33:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:26:19.813 10:33:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:19.813 10:33:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:19.813 10:33:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:19.813 10:33:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:19.813 10:33:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIyYmI2MzZiMzAwODlmN2RkZTMyYzY0MDc0YjM4MjS+gdJ6: 00:26:19.813 10:33:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDMxNGY5MDNmNGNlOTA0ZDNiNDZmNWI5NjIzMmJlYTRjODJhMGJlYmI4NjVkNDQyMjY1YzQxOGFjMzM1MDEzOFEPKVM=: 00:26:19.813 10:33:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:19.813 10:33:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:19.813 10:33:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIyYmI2MzZiMzAwODlmN2RkZTMyYzY0MDc0YjM4MjS+gdJ6: 00:26:19.813 10:33:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDMxNGY5MDNmNGNlOTA0ZDNiNDZmNWI5NjIzMmJlYTRjODJhMGJlYmI4NjVkNDQyMjY1YzQxOGFjMzM1MDEzOFEPKVM=: ]] 00:26:19.813 10:33:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDMxNGY5MDNmNGNlOTA0ZDNiNDZmNWI5NjIzMmJlYTRjODJhMGJlYmI4NjVkNDQyMjY1YzQxOGFjMzM1MDEzOFEPKVM=: 00:26:19.813 10:33:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:26:19.813 10:33:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:19.813 10:33:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:19.813 10:33:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:19.813 10:33:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:19.813 10:33:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:19.813 10:33:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:19.813 10:33:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.813 10:33:56 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:19.813 10:33:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.813 10:33:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:19.813 10:33:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:19.813 10:33:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:19.813 10:33:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:19.813 10:33:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:19.813 10:33:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:19.813 10:33:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:19.813 10:33:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:19.813 10:33:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:19.813 10:33:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:19.813 10:33:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:19.813 10:33:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:19.813 10:33:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.813 10:33:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.383 nvme0n1 00:26:20.383 10:33:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.383 10:33:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.383 10:33:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:20.383 10:33:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.383 10:33:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.383 10:33:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.383 10:33:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.383 10:33:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.383 10:33:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.383 10:33:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.383 10:33:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.383 10:33:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:20.383 10:33:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:26:20.383 10:33:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.383 10:33:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:20.383 10:33:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:20.383 10:33:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:20.383 10:33:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDNjNjRjMDQ4M2ExYjI1YmRhNDI0ZjQ4NzFiNzk4MmJhODExNTQxZDc2NjRhNGYwcKa7Qg==: 00:26:20.383 10:33:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:NTkwMzFkNDdlZTc1ZTc5NmMzYjEzMTE0ODFjMjBmZDMwYWUyODMxNTBmZWQ2NDE2bZEhIA==: 00:26:20.383 10:33:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:20.383 10:33:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:20.383 10:33:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDNjNjRjMDQ4M2ExYjI1YmRhNDI0ZjQ4NzFiNzk4MmJhODExNTQxZDc2NjRhNGYwcKa7Qg==: 00:26:20.383 10:33:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTkwMzFkNDdlZTc1ZTc5NmMzYjEzMTE0ODFjMjBmZDMwYWUyODMxNTBmZWQ2NDE2bZEhIA==: ]] 00:26:20.383 10:33:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTkwMzFkNDdlZTc1ZTc5NmMzYjEzMTE0ODFjMjBmZDMwYWUyODMxNTBmZWQ2NDE2bZEhIA==: 00:26:20.383 10:33:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:26:20.383 10:33:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:20.383 10:33:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:20.383 10:33:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:20.383 10:33:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:20.383 10:33:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:20.383 10:33:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:20.383 10:33:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.383 10:33:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.383 10:33:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.383 10:33:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:20.383 10:33:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:20.383 10:33:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:20.383 10:33:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:20.383 10:33:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.383 10:33:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.383 10:33:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:20.383 10:33:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:20.383 10:33:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:20.383 10:33:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:20.383 10:33:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:20.383 10:33:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:20.383 10:33:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.383 10:33:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.954 nvme0n1 00:26:20.954 10:33:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.954 10:33:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.954 10:33:58 
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.954 10:33:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:20.954 10:33:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.954 10:33:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.954 10:33:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.954 10:33:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.954 10:33:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.954 10:33:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.954 10:33:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.954 10:33:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:20.954 10:33:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:26:20.954 10:33:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.954 10:33:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:20.954 10:33:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:20.954 10:33:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:20.954 10:33:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjdmY2M5ODI3ODNlMjc2OWY3OTlkMzY4NmFlZTMxMDN8KQKJ: 00:26:20.954 10:33:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQyMzIyMzg4ZGVmYTMwMjkwMjMzN2I0ZmEyNDg3MDhJ0Uvw: 00:26:20.954 10:33:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:20.954 10:33:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:20.954 10:33:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjdmY2M5ODI3ODNlMjc2OWY3OTlkMzY4NmFlZTMxMDN8KQKJ: 00:26:20.954 10:33:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQyMzIyMzg4ZGVmYTMwMjkwMjMzN2I0ZmEyNDg3MDhJ0Uvw: ]] 00:26:20.954 10:33:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQyMzIyMzg4ZGVmYTMwMjkwMjMzN2I0ZmEyNDg3MDhJ0Uvw: 00:26:20.954 10:33:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:26:20.954 10:33:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:20.954 10:33:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:20.954 10:33:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:20.954 10:33:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:20.954 10:33:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:20.954 10:33:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:20.954 10:33:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.954 10:33:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.954 10:33:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:20.954 10:33:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:20.954 10:33:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:20.954 10:33:58 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:26:20.954 10:33:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:20.954 10:33:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.954 10:33:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.954 10:33:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:20.954 10:33:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:20.954 10:33:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:20.954 10:33:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:20.954 10:33:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:20.954 10:33:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:20.954 10:33:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:20.954 10:33:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.526 nvme0n1 00:26:21.526 10:33:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.526 10:33:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:21.526 10:33:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.526 10:33:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:21.526 10:33:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.526 10:33:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.788 10:33:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:21.788 10:33:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:21.788 10:33:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.788 10:33:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.788 10:33:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.788 10:33:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:21.788 10:33:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:26:21.788 10:33:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:21.788 10:33:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:21.788 10:33:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:21.788 10:33:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:21.788 10:33:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTA2NDI5M2JlM2Q4OTFhNTkwYzJmNmFlZTk4Mzg1Yzk1NjczY2Q4MjQyZTUzZWIyOuxDtw==: 00:26:21.788 10:33:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjI2ODVhYjRmMWU5ZTJkNWI3OTEwOWJjZTZlZGI0NmISBty3: 00:26:21.788 10:33:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:21.788 10:33:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:21.788 10:33:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:OTA2NDI5M2JlM2Q4OTFhNTkwYzJmNmFlZTk4Mzg1Yzk1NjczY2Q4MjQyZTUzZWIyOuxDtw==: 00:26:21.788 10:33:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjI2ODVhYjRmMWU5ZTJkNWI3OTEwOWJjZTZlZGI0NmISBty3: ]] 00:26:21.788 10:33:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjI2ODVhYjRmMWU5ZTJkNWI3OTEwOWJjZTZlZGI0NmISBty3: 00:26:21.788 10:33:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:26:21.788 10:33:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:21.788 10:33:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:21.788 10:33:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:21.788 10:33:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:21.788 10:33:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:21.788 10:33:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:21.788 10:33:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.788 10:33:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.788 10:33:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.788 10:33:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:21.788 10:33:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:21.788 10:33:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:21.788 10:33:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:21.788 10:33:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:21.788 10:33:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:21.788 10:33:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:21.788 10:33:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:21.788 10:33:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:21.788 10:33:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:21.788 10:33:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:21.788 10:33:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:21.788 10:33:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.788 10:33:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.358 nvme0n1 00:26:22.358 10:33:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.358 10:33:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:22.358 10:33:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:22.358 10:33:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.358 10:33:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.358 10:33:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.358 10:33:59 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:22.358 10:33:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:22.358 10:33:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.358 10:33:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.358 10:33:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.358 10:33:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:22.358 10:33:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:26:22.358 10:33:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:22.359 10:33:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:22.359 10:33:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:22.359 10:33:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:22.359 10:33:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZThmNDI4MTIyM2ZkMTczYmZmZTg2NjliZmNkNDcwYTAyZDNmOTVjZjk1NzBhMTQ2Zjk2N2ZkNmI0MjY4NGI4M6+Fj4w=: 00:26:22.359 10:33:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:22.359 10:33:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:22.359 10:33:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:22.359 10:33:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZThmNDI4MTIyM2ZkMTczYmZmZTg2NjliZmNkNDcwYTAyZDNmOTVjZjk1NzBhMTQ2Zjk2N2ZkNmI0MjY4NGI4M6+Fj4w=: 00:26:22.359 10:33:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:22.359 10:33:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:26:22.359 10:33:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:22.359 10:33:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:22.359 10:33:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:22.359 10:33:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:22.359 10:33:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:22.359 10:33:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:26:22.359 10:33:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.359 10:33:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.359 10:33:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.359 10:33:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:22.359 10:33:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:22.359 10:33:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:22.359 10:33:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:22.359 10:33:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:22.359 10:33:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:22.359 10:33:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:22.359 10:33:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
NVMF_FIRST_TARGET_IP ]] 00:26:22.359 10:33:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:22.359 10:33:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:22.359 10:33:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:22.359 10:33:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:22.359 10:33:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.359 10:33:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.928 nvme0n1 00:26:22.928 10:33:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.928 10:33:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:22.928 10:33:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:22.928 10:33:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.928 10:33:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.928 10:33:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.928 10:34:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:22.928 10:34:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:22.928 10:34:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.928 10:34:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.928 10:34:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.928 10:34:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:22.928 10:34:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:22.928 10:34:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:26:22.928 10:34:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:22.928 10:34:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:22.928 10:34:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:22.928 10:34:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:22.928 10:34:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIyYmI2MzZiMzAwODlmN2RkZTMyYzY0MDc0YjM4MjS+gdJ6: 00:26:22.928 10:34:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDMxNGY5MDNmNGNlOTA0ZDNiNDZmNWI5NjIzMmJlYTRjODJhMGJlYmI4NjVkNDQyMjY1YzQxOGFjMzM1MDEzOFEPKVM=: 00:26:22.928 10:34:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:22.929 10:34:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:22.929 10:34:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIyYmI2MzZiMzAwODlmN2RkZTMyYzY0MDc0YjM4MjS+gdJ6: 00:26:22.929 10:34:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDMxNGY5MDNmNGNlOTA0ZDNiNDZmNWI5NjIzMmJlYTRjODJhMGJlYmI4NjVkNDQyMjY1YzQxOGFjMzM1MDEzOFEPKVM=: ]] 00:26:22.929 10:34:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDMxNGY5MDNmNGNlOTA0ZDNiNDZmNWI5NjIzMmJlYTRjODJhMGJlYmI4NjVkNDQyMjY1YzQxOGFjMzM1MDEzOFEPKVM=: 00:26:22.929 10:34:00 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:26:22.929 10:34:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:22.929 10:34:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:22.929 10:34:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:22.929 10:34:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:22.929 10:34:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:22.929 10:34:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:22.929 10:34:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.929 10:34:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.929 10:34:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.929 10:34:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:22.929 10:34:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:22.929 10:34:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:22.929 10:34:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:22.929 10:34:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:22.929 10:34:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:22.929 10:34:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:22.929 10:34:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:22.929 10:34:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:22.929 10:34:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:22.929 10:34:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:22.929 10:34:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:22.929 10:34:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.929 10:34:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.870 nvme0n1 00:26:23.870 10:34:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.870 10:34:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.870 10:34:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:23.870 10:34:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.870 10:34:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.870 10:34:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.870 10:34:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.870 10:34:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.870 10:34:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.870 10:34:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.870 10:34:01 nvmf_rdma.nvmf_auth_host 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.870 10:34:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:23.870 10:34:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:26:23.870 10:34:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.870 10:34:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:23.870 10:34:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:23.870 10:34:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:23.870 10:34:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDNjNjRjMDQ4M2ExYjI1YmRhNDI0ZjQ4NzFiNzk4MmJhODExNTQxZDc2NjRhNGYwcKa7Qg==: 00:26:23.870 10:34:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTkwMzFkNDdlZTc1ZTc5NmMzYjEzMTE0ODFjMjBmZDMwYWUyODMxNTBmZWQ2NDE2bZEhIA==: 00:26:23.870 10:34:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:23.870 10:34:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:23.870 10:34:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDNjNjRjMDQ4M2ExYjI1YmRhNDI0ZjQ4NzFiNzk4MmJhODExNTQxZDc2NjRhNGYwcKa7Qg==: 00:26:23.870 10:34:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTkwMzFkNDdlZTc1ZTc5NmMzYjEzMTE0ODFjMjBmZDMwYWUyODMxNTBmZWQ2NDE2bZEhIA==: ]] 00:26:23.870 10:34:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTkwMzFkNDdlZTc1ZTc5NmMzYjEzMTE0ODFjMjBmZDMwYWUyODMxNTBmZWQ2NDE2bZEhIA==: 00:26:23.870 10:34:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:26:23.870 10:34:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:23.870 10:34:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:23.870 10:34:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:23.870 10:34:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:23.870 10:34:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:23.870 10:34:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:23.870 10:34:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.870 10:34:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.870 10:34:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:23.870 10:34:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:23.870 10:34:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:23.870 10:34:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:23.871 10:34:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:23.871 10:34:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.871 10:34:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.871 10:34:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:23.871 10:34:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:23.871 10:34:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:23.871 10:34:01 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:23.871 10:34:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:23.871 10:34:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:23.871 10:34:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:23.871 10:34:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.810 nvme0n1 00:26:24.810 10:34:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.810 10:34:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.810 10:34:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.810 10:34:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.810 10:34:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.810 10:34:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.811 10:34:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.811 10:34:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.811 10:34:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.811 10:34:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.811 10:34:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.811 10:34:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:24.811 10:34:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:26:24.811 10:34:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.811 10:34:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:24.811 10:34:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:24.811 10:34:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:24.811 10:34:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjdmY2M5ODI3ODNlMjc2OWY3OTlkMzY4NmFlZTMxMDN8KQKJ: 00:26:24.811 10:34:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQyMzIyMzg4ZGVmYTMwMjkwMjMzN2I0ZmEyNDg3MDhJ0Uvw: 00:26:24.811 10:34:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:24.811 10:34:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:24.811 10:34:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjdmY2M5ODI3ODNlMjc2OWY3OTlkMzY4NmFlZTMxMDN8KQKJ: 00:26:24.811 10:34:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQyMzIyMzg4ZGVmYTMwMjkwMjMzN2I0ZmEyNDg3MDhJ0Uvw: ]] 00:26:24.811 10:34:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQyMzIyMzg4ZGVmYTMwMjkwMjMzN2I0ZmEyNDg3MDhJ0Uvw: 00:26:24.811 10:34:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:26:24.811 10:34:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:24.811 10:34:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:24.811 10:34:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:24.811 
10:34:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:24.811 10:34:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.811 10:34:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:24.811 10:34:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.811 10:34:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.811 10:34:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:24.811 10:34:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:24.811 10:34:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:24.811 10:34:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:24.811 10:34:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:24.811 10:34:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.811 10:34:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.811 10:34:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:24.811 10:34:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:24.811 10:34:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:24.811 10:34:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:24.811 10:34:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:24.811 10:34:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:24.811 10:34:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:24.811 10:34:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.792 nvme0n1 00:26:25.792 10:34:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.792 10:34:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:25.792 10:34:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.792 10:34:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:25.792 10:34:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.792 10:34:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.792 10:34:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:25.792 10:34:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:25.792 10:34:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.792 10:34:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.792 10:34:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.792 10:34:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:25.792 10:34:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:26:25.792 10:34:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup 
keyid key ckey 00:26:25.792 10:34:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:25.792 10:34:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:25.792 10:34:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:25.792 10:34:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTA2NDI5M2JlM2Q4OTFhNTkwYzJmNmFlZTk4Mzg1Yzk1NjczY2Q4MjQyZTUzZWIyOuxDtw==: 00:26:25.792 10:34:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjI2ODVhYjRmMWU5ZTJkNWI3OTEwOWJjZTZlZGI0NmISBty3: 00:26:25.792 10:34:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:25.792 10:34:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:25.792 10:34:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTA2NDI5M2JlM2Q4OTFhNTkwYzJmNmFlZTk4Mzg1Yzk1NjczY2Q4MjQyZTUzZWIyOuxDtw==: 00:26:25.792 10:34:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjI2ODVhYjRmMWU5ZTJkNWI3OTEwOWJjZTZlZGI0NmISBty3: ]] 00:26:25.792 10:34:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjI2ODVhYjRmMWU5ZTJkNWI3OTEwOWJjZTZlZGI0NmISBty3: 00:26:25.792 10:34:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:26:25.792 10:34:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:25.792 10:34:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:25.792 10:34:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:25.792 10:34:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:25.792 10:34:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:25.792 10:34:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:25.792 10:34:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.792 10:34:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.792 10:34:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:25.792 10:34:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:25.792 10:34:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:25.792 10:34:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:25.792 10:34:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:25.792 10:34:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:25.792 10:34:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:25.792 10:34:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:25.792 10:34:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:25.792 10:34:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:25.792 10:34:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:25.792 10:34:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:25.792 10:34:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:25.792 
10:34:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:25.792 10:34:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.767 nvme0n1 00:26:26.767 10:34:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.767 10:34:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:26.767 10:34:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:26.767 10:34:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.767 10:34:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.767 10:34:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.767 10:34:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:26.767 10:34:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:26.767 10:34:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.767 10:34:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.767 10:34:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.767 10:34:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:26.767 10:34:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:26:26.767 10:34:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:26.767 10:34:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:26.767 10:34:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:26.767 10:34:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:26.767 10:34:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZThmNDI4MTIyM2ZkMTczYmZmZTg2NjliZmNkNDcwYTAyZDNmOTVjZjk1NzBhMTQ2Zjk2N2ZkNmI0MjY4NGI4M6+Fj4w=: 00:26:26.767 10:34:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:26.767 10:34:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:26.767 10:34:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:26.767 10:34:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZThmNDI4MTIyM2ZkMTczYmZmZTg2NjliZmNkNDcwYTAyZDNmOTVjZjk1NzBhMTQ2Zjk2N2ZkNmI0MjY4NGI4M6+Fj4w=: 00:26:26.767 10:34:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:26.767 10:34:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:26:26.767 10:34:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:26.767 10:34:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:26.767 10:34:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:26.767 10:34:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:26.767 10:34:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:26.767 10:34:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:26.767 10:34:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.767 10:34:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.767 10:34:03 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.767 10:34:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:26.767 10:34:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:26.767 10:34:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:26.767 10:34:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:26.767 10:34:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.767 10:34:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.767 10:34:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:26.767 10:34:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:26.767 10:34:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:26.767 10:34:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:26.767 10:34:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:26.767 10:34:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:26.767 10:34:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.767 10:34:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.706 nvme0n1 00:26:27.706 10:34:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.707 10:34:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.707 10:34:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:27.707 10:34:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.707 10:34:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.707 10:34:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.707 10:34:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.707 10:34:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.707 10:34:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.707 10:34:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.707 10:34:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.707 10:34:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:27.707 10:34:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:27.707 10:34:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:27.707 10:34:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:26:27.707 10:34:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:27.707 10:34:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:27.707 10:34:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:27.707 10:34:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:27.707 10:34:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIyYmI2MzZiMzAwODlmN2RkZTMyYzY0MDc0YjM4MjS+gdJ6: 
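The repetition in this trace is produced by three nested loops in host/auth.sh, visible above as the auth.sh@100, @101 and @102 'for' lines: one over digests, one over DH groups, one over the key indices. A minimal bash sketch of that structure follows; the array contents and the stubbed helpers are assumptions reconstructed only from values that appear in this excerpt, not from the full script.

    # Skeleton of the loops driving this trace (values limited to what the log shows).
    digests=(sha384 sha512)
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe6144 ffdhe8192)
    keys=(k0 k1 k2 k3 k4)             # placeholders for the DHHC-1:xx:... secrets echoed above
    ckeys=(c0 c1 c2 c3 "")            # keyid 4 has no controller key, hence its "[[ -z '' ]]" rounds
    nvmet_auth_set_key()   { :; }     # traced above at auth.sh@42-@51 (target-side key setup)
    connect_authenticate() { :; }     # traced above at auth.sh@55-@65 (host-side RPC sequence)

    for digest in "${digests[@]}"; do            # auth.sh@100
      for dhgroup in "${dhgroups[@]}"; do        # auth.sh@101
        for keyid in "${!keys[@]}"; do           # auth.sh@102
          nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # auth.sh@103
          connect_authenticate "$digest" "$dhgroup" "$keyid"   # auth.sh@104
        done
      done
    done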
00:26:27.707 10:34:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDMxNGY5MDNmNGNlOTA0ZDNiNDZmNWI5NjIzMmJlYTRjODJhMGJlYmI4NjVkNDQyMjY1YzQxOGFjMzM1MDEzOFEPKVM=: 00:26:27.707 10:34:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:27.707 10:34:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:27.707 10:34:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIyYmI2MzZiMzAwODlmN2RkZTMyYzY0MDc0YjM4MjS+gdJ6: 00:26:27.707 10:34:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDMxNGY5MDNmNGNlOTA0ZDNiNDZmNWI5NjIzMmJlYTRjODJhMGJlYmI4NjVkNDQyMjY1YzQxOGFjMzM1MDEzOFEPKVM=: ]] 00:26:27.707 10:34:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDMxNGY5MDNmNGNlOTA0ZDNiNDZmNWI5NjIzMmJlYTRjODJhMGJlYmI4NjVkNDQyMjY1YzQxOGFjMzM1MDEzOFEPKVM=: 00:26:27.707 10:34:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:26:27.707 10:34:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:27.707 10:34:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:27.707 10:34:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:27.707 10:34:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:27.707 10:34:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:27.707 10:34:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:27.707 10:34:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.707 10:34:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.707 10:34:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.707 10:34:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:27.707 10:34:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:27.707 10:34:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:27.707 10:34:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:27.707 10:34:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.707 10:34:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.707 10:34:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:27.707 10:34:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:27.707 10:34:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:27.707 10:34:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:27.707 10:34:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:27.707 10:34:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:27.707 10:34:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.707 10:34:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.967 nvme0n1 00:26:27.967 10:34:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.967 10:34:05 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.967 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:27.967 10:34:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.967 10:34:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.967 10:34:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.967 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.967 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.967 10:34:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.967 10:34:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.968 10:34:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.968 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:27.968 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:26:27.968 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:27.968 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:27.968 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:27.968 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:27.968 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDNjNjRjMDQ4M2ExYjI1YmRhNDI0ZjQ4NzFiNzk4MmJhODExNTQxZDc2NjRhNGYwcKa7Qg==: 00:26:27.968 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTkwMzFkNDdlZTc1ZTc5NmMzYjEzMTE0ODFjMjBmZDMwYWUyODMxNTBmZWQ2NDE2bZEhIA==: 00:26:27.968 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:27.968 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:27.968 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDNjNjRjMDQ4M2ExYjI1YmRhNDI0ZjQ4NzFiNzk4MmJhODExNTQxZDc2NjRhNGYwcKa7Qg==: 00:26:27.968 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTkwMzFkNDdlZTc1ZTc5NmMzYjEzMTE0ODFjMjBmZDMwYWUyODMxNTBmZWQ2NDE2bZEhIA==: ]] 00:26:27.968 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTkwMzFkNDdlZTc1ZTc5NmMzYjEzMTE0ODFjMjBmZDMwYWUyODMxNTBmZWQ2NDE2bZEhIA==: 00:26:27.968 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:26:27.968 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:27.968 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:27.968 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:27.968 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:27.968 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:27.968 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:27.968 10:34:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.968 10:34:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.968 10:34:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
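Each connect_authenticate round above runs the same host-side sequence of SPDK RPCs; rpc_cmd is the test helper that forwards to SPDK's rpc.py. The sketch below is assembled from the rpc_cmd lines visible in the trace, using the round that surrounds this point (sha512 / ffdhe2048 / keyid 1); the shell variables are placeholders, while the flags, NQNs and address are copied from the log.

    # One host-side authentication round, as traced at auth.sh@60-@65.
    digest=sha512 dhgroup=ffdhe2048 keyid=1
    ip=192.168.100.8    # NVMF_FIRST_TARGET_IP, resolved by get_main_ns_ip (nvmf/common.sh@741-@755)

    # 1) Restrict the host to the digest/DH group under test.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # 2) Connect with the key under test; the controller key argument is dropped for keyid 4.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a "$ip" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

    # 3) Verify the controller appeared, then detach before the next round.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0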
00:26:27.968 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:27.968 10:34:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:27.968 10:34:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:27.968 10:34:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:27.968 10:34:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.968 10:34:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.968 10:34:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:27.968 10:34:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:27.968 10:34:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:27.968 10:34:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:27.968 10:34:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:28.229 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:28.229 10:34:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.229 10:34:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.229 nvme0n1 00:26:28.229 10:34:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.229 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.229 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:28.229 10:34:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.229 10:34:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.229 10:34:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.490 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:28.490 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:28.490 10:34:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.490 10:34:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.490 10:34:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.490 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:28.490 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:26:28.490 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:28.490 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:28.490 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:28.490 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:28.490 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjdmY2M5ODI3ODNlMjc2OWY3OTlkMzY4NmFlZTMxMDN8KQKJ: 00:26:28.490 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQyMzIyMzg4ZGVmYTMwMjkwMjMzN2I0ZmEyNDg3MDhJ0Uvw: 00:26:28.490 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 
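The auth.sh@58 lines in this trace, ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}), rely on the ':+' parameter expansion so that the optional --dhchap-ctrlr-key argument is only passed when a controller key exists; keyid 4 has an empty ckey, which is why its rounds show '[[ -z '' ]]' and attach with --dhchap-key only. A stripped-down illustration of the same idiom, independent of the test script and using placeholder values:

    # ${var:+word} expands to word only when var is set and non-empty,
    # so the optional flag simply disappears from the argument array otherwise.
    ckeys=([1]="DHHC-1:02:placeholder" [4]="")
    for keyid in 1 4; do
        extra=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${extra[*]:-(no controller key argument)}"
    done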
00:26:28.490 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:28.490 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjdmY2M5ODI3ODNlMjc2OWY3OTlkMzY4NmFlZTMxMDN8KQKJ: 00:26:28.490 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQyMzIyMzg4ZGVmYTMwMjkwMjMzN2I0ZmEyNDg3MDhJ0Uvw: ]] 00:26:28.490 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQyMzIyMzg4ZGVmYTMwMjkwMjMzN2I0ZmEyNDg3MDhJ0Uvw: 00:26:28.490 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:26:28.490 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:28.490 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:28.490 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:28.490 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:28.490 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:28.490 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:28.490 10:34:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.490 10:34:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.490 10:34:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.490 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:28.490 10:34:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:28.490 10:34:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:28.490 10:34:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:28.490 10:34:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.490 10:34:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.490 10:34:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:28.490 10:34:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:28.490 10:34:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:28.490 10:34:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:28.490 10:34:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:28.490 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:28.490 10:34:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.490 10:34:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.751 nvme0n1 00:26:28.751 10:34:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.751 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.751 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:28.751 10:34:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.751 10:34:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:26:28.751 10:34:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.751 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:28.751 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:28.751 10:34:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.751 10:34:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.751 10:34:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.751 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:28.751 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:26:28.751 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:28.751 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:28.751 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:28.751 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:28.751 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTA2NDI5M2JlM2Q4OTFhNTkwYzJmNmFlZTk4Mzg1Yzk1NjczY2Q4MjQyZTUzZWIyOuxDtw==: 00:26:28.751 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjI2ODVhYjRmMWU5ZTJkNWI3OTEwOWJjZTZlZGI0NmISBty3: 00:26:28.751 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:28.751 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:28.751 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTA2NDI5M2JlM2Q4OTFhNTkwYzJmNmFlZTk4Mzg1Yzk1NjczY2Q4MjQyZTUzZWIyOuxDtw==: 00:26:28.751 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjI2ODVhYjRmMWU5ZTJkNWI3OTEwOWJjZTZlZGI0NmISBty3: ]] 00:26:28.751 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjI2ODVhYjRmMWU5ZTJkNWI3OTEwOWJjZTZlZGI0NmISBty3: 00:26:28.751 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:26:28.751 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:28.751 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:28.751 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:28.751 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:28.751 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:28.752 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:28.752 10:34:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.752 10:34:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.752 10:34:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.752 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:28.752 10:34:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:28.752 10:34:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:28.752 10:34:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:28.752 10:34:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.752 10:34:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.752 10:34:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:28.752 10:34:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:28.752 10:34:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:28.752 10:34:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:28.752 10:34:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:28.752 10:34:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:28.752 10:34:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.752 10:34:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.013 nvme0n1 00:26:29.013 10:34:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.013 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.013 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:29.013 10:34:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.013 10:34:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.013 10:34:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.013 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.013 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:29.013 10:34:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.013 10:34:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.013 10:34:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.013 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:29.013 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:26:29.013 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:29.013 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:29.013 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:29.013 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:29.013 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZThmNDI4MTIyM2ZkMTczYmZmZTg2NjliZmNkNDcwYTAyZDNmOTVjZjk1NzBhMTQ2Zjk2N2ZkNmI0MjY4NGI4M6+Fj4w=: 00:26:29.013 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:29.013 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:29.013 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:29.013 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZThmNDI4MTIyM2ZkMTczYmZmZTg2NjliZmNkNDcwYTAyZDNmOTVjZjk1NzBhMTQ2Zjk2N2ZkNmI0MjY4NGI4M6+Fj4w=: 00:26:29.013 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:29.013 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 
ffdhe2048 4 00:26:29.013 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:29.013 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:29.013 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:29.013 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:29.013 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:29.013 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:29.013 10:34:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.013 10:34:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.013 10:34:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.013 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:29.013 10:34:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:29.013 10:34:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:29.013 10:34:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:29.013 10:34:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.013 10:34:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.013 10:34:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:29.013 10:34:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:29.013 10:34:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:29.013 10:34:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:29.013 10:34:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:29.013 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:29.013 10:34:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.013 10:34:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.275 nvme0n1 00:26:29.275 10:34:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.275 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.275 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:29.275 10:34:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.275 10:34:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.275 10:34:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.275 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.275 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:29.275 10:34:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.275 10:34:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.275 10:34:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.275 10:34:06 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:29.275 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:29.275 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:26:29.275 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:29.275 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:29.275 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:29.275 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:29.275 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIyYmI2MzZiMzAwODlmN2RkZTMyYzY0MDc0YjM4MjS+gdJ6: 00:26:29.275 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDMxNGY5MDNmNGNlOTA0ZDNiNDZmNWI5NjIzMmJlYTRjODJhMGJlYmI4NjVkNDQyMjY1YzQxOGFjMzM1MDEzOFEPKVM=: 00:26:29.275 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:29.275 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:29.275 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIyYmI2MzZiMzAwODlmN2RkZTMyYzY0MDc0YjM4MjS+gdJ6: 00:26:29.275 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDMxNGY5MDNmNGNlOTA0ZDNiNDZmNWI5NjIzMmJlYTRjODJhMGJlYmI4NjVkNDQyMjY1YzQxOGFjMzM1MDEzOFEPKVM=: ]] 00:26:29.275 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDMxNGY5MDNmNGNlOTA0ZDNiNDZmNWI5NjIzMmJlYTRjODJhMGJlYmI4NjVkNDQyMjY1YzQxOGFjMzM1MDEzOFEPKVM=: 00:26:29.275 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:26:29.275 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:29.275 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:29.275 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:29.275 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:29.275 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:29.275 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:29.275 10:34:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.275 10:34:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.275 10:34:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.275 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:29.275 10:34:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:29.275 10:34:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:29.275 10:34:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:29.275 10:34:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.275 10:34:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.275 10:34:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:29.275 10:34:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:29.275 10:34:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # 
ip=NVMF_FIRST_TARGET_IP 00:26:29.275 10:34:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:29.275 10:34:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:29.275 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:29.275 10:34:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.535 10:34:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.535 nvme0n1 00:26:29.535 10:34:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.796 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.796 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:29.796 10:34:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.796 10:34:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.796 10:34:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.796 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.796 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:29.796 10:34:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.796 10:34:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.796 10:34:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.796 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:29.796 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:26:29.796 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:29.796 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:29.796 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:29.796 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:29.796 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDNjNjRjMDQ4M2ExYjI1YmRhNDI0ZjQ4NzFiNzk4MmJhODExNTQxZDc2NjRhNGYwcKa7Qg==: 00:26:29.796 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTkwMzFkNDdlZTc1ZTc5NmMzYjEzMTE0ODFjMjBmZDMwYWUyODMxNTBmZWQ2NDE2bZEhIA==: 00:26:29.796 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:29.796 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:29.796 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDNjNjRjMDQ4M2ExYjI1YmRhNDI0ZjQ4NzFiNzk4MmJhODExNTQxZDc2NjRhNGYwcKa7Qg==: 00:26:29.796 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTkwMzFkNDdlZTc1ZTc5NmMzYjEzMTE0ODFjMjBmZDMwYWUyODMxNTBmZWQ2NDE2bZEhIA==: ]] 00:26:29.796 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTkwMzFkNDdlZTc1ZTc5NmMzYjEzMTE0ODFjMjBmZDMwYWUyODMxNTBmZWQ2NDE2bZEhIA==: 00:26:29.796 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:26:29.796 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:29.796 10:34:06 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:29.796 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:29.796 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:29.796 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:29.796 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:29.796 10:34:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.796 10:34:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.796 10:34:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.796 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:29.796 10:34:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:29.796 10:34:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:29.796 10:34:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:29.796 10:34:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.796 10:34:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.796 10:34:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:29.796 10:34:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:29.796 10:34:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:29.796 10:34:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:29.796 10:34:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:29.796 10:34:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:29.797 10:34:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.797 10:34:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.058 nvme0n1 00:26:30.058 10:34:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.058 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:30.058 10:34:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.058 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:30.058 10:34:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.058 10:34:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.058 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:30.058 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:30.058 10:34:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.058 10:34:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.058 10:34:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.058 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:30.058 10:34:07 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:26:30.058 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:30.058 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:30.058 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:30.058 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:30.058 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjdmY2M5ODI3ODNlMjc2OWY3OTlkMzY4NmFlZTMxMDN8KQKJ: 00:26:30.058 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQyMzIyMzg4ZGVmYTMwMjkwMjMzN2I0ZmEyNDg3MDhJ0Uvw: 00:26:30.058 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:30.058 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:30.058 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjdmY2M5ODI3ODNlMjc2OWY3OTlkMzY4NmFlZTMxMDN8KQKJ: 00:26:30.058 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQyMzIyMzg4ZGVmYTMwMjkwMjMzN2I0ZmEyNDg3MDhJ0Uvw: ]] 00:26:30.058 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQyMzIyMzg4ZGVmYTMwMjkwMjMzN2I0ZmEyNDg3MDhJ0Uvw: 00:26:30.058 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:26:30.058 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:30.058 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:30.058 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:30.058 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:30.058 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:30.058 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:30.058 10:34:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.058 10:34:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.058 10:34:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.058 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:30.058 10:34:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:30.058 10:34:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:30.058 10:34:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:30.058 10:34:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:30.058 10:34:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:30.058 10:34:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:30.058 10:34:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:30.058 10:34:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:30.058 10:34:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:30.058 10:34:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:30.058 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 
192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:30.058 10:34:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.058 10:34:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.319 nvme0n1 00:26:30.319 10:34:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.319 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:30.319 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:30.319 10:34:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.319 10:34:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.319 10:34:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.319 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:30.319 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:30.319 10:34:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.319 10:34:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.580 10:34:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.580 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:30.580 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:26:30.580 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:30.580 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:30.580 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:30.580 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:30.580 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTA2NDI5M2JlM2Q4OTFhNTkwYzJmNmFlZTk4Mzg1Yzk1NjczY2Q4MjQyZTUzZWIyOuxDtw==: 00:26:30.580 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjI2ODVhYjRmMWU5ZTJkNWI3OTEwOWJjZTZlZGI0NmISBty3: 00:26:30.580 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:30.580 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:30.580 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTA2NDI5M2JlM2Q4OTFhNTkwYzJmNmFlZTk4Mzg1Yzk1NjczY2Q4MjQyZTUzZWIyOuxDtw==: 00:26:30.580 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjI2ODVhYjRmMWU5ZTJkNWI3OTEwOWJjZTZlZGI0NmISBty3: ]] 00:26:30.580 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjI2ODVhYjRmMWU5ZTJkNWI3OTEwOWJjZTZlZGI0NmISBty3: 00:26:30.580 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:26:30.580 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:30.580 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:30.580 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:30.580 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:30.580 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:30.580 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 
-- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:30.580 10:34:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.580 10:34:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.580 10:34:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.580 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:30.580 10:34:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:30.580 10:34:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:30.580 10:34:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:30.580 10:34:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:30.580 10:34:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:30.580 10:34:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:30.581 10:34:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:30.581 10:34:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:30.581 10:34:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:30.581 10:34:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:30.581 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:30.581 10:34:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.581 10:34:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.841 nvme0n1 00:26:30.841 10:34:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.841 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:30.841 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:30.841 10:34:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.841 10:34:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.841 10:34:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.841 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:30.841 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:30.841 10:34:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.841 10:34:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.841 10:34:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.841 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:30.841 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:26:30.841 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:30.841 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:30.841 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:30.841 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # 
keyid=4 00:26:30.841 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZThmNDI4MTIyM2ZkMTczYmZmZTg2NjliZmNkNDcwYTAyZDNmOTVjZjk1NzBhMTQ2Zjk2N2ZkNmI0MjY4NGI4M6+Fj4w=: 00:26:30.841 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:30.841 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:30.841 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:30.841 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZThmNDI4MTIyM2ZkMTczYmZmZTg2NjliZmNkNDcwYTAyZDNmOTVjZjk1NzBhMTQ2Zjk2N2ZkNmI0MjY4NGI4M6+Fj4w=: 00:26:30.841 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:30.841 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:26:30.841 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:30.841 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:30.841 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:30.841 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:30.841 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:30.841 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:30.841 10:34:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.841 10:34:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.841 10:34:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.841 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:30.841 10:34:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:30.841 10:34:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:30.841 10:34:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:30.841 10:34:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:30.841 10:34:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:30.841 10:34:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:30.841 10:34:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:30.841 10:34:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:30.841 10:34:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:30.841 10:34:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:30.841 10:34:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:30.841 10:34:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.841 10:34:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.102 nvme0n1 00:26:31.102 10:34:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.102 10:34:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.102 10:34:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
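What the xtrace above is walking through (host/auth.sh@101-104) is a nested sweep: for every DH group, each key index is first programmed on the target side with nvmet_auth_set_key and then exercised from the SPDK host with connect_authenticate. A minimal bash sketch of that outer loop, reconstructed from the trace rather than copied from the script source (the digest/dhgroups/keys variable names are assumptions inferred from the @101-@103 lines; nvmet_auth_set_key and connect_authenticate are the script's own helpers seen in the trace):

    digest=sha512
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144)   # groups exercised in this run
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do                   # keys[0..4] hold the DHHC-1 secrets set up earlier
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target-side digest/dhgroup/key setup
            connect_authenticate "$digest" "$dhgroup" "$keyid"  # host-side attach, verify, detach
        done
    done

Each inner pass corresponds to one attach / get_controllers / detach block like the ones logged above.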
00:26:31.102 10:34:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.102 10:34:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.102 10:34:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.102 10:34:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:31.102 10:34:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:31.102 10:34:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.102 10:34:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.102 10:34:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.102 10:34:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:31.102 10:34:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:31.102 10:34:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:26:31.102 10:34:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:31.102 10:34:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:31.102 10:34:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:31.102 10:34:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:31.102 10:34:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIyYmI2MzZiMzAwODlmN2RkZTMyYzY0MDc0YjM4MjS+gdJ6: 00:26:31.102 10:34:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDMxNGY5MDNmNGNlOTA0ZDNiNDZmNWI5NjIzMmJlYTRjODJhMGJlYmI4NjVkNDQyMjY1YzQxOGFjMzM1MDEzOFEPKVM=: 00:26:31.102 10:34:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:31.102 10:34:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:31.103 10:34:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIyYmI2MzZiMzAwODlmN2RkZTMyYzY0MDc0YjM4MjS+gdJ6: 00:26:31.103 10:34:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDMxNGY5MDNmNGNlOTA0ZDNiNDZmNWI5NjIzMmJlYTRjODJhMGJlYmI4NjVkNDQyMjY1YzQxOGFjMzM1MDEzOFEPKVM=: ]] 00:26:31.103 10:34:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDMxNGY5MDNmNGNlOTA0ZDNiNDZmNWI5NjIzMmJlYTRjODJhMGJlYmI4NjVkNDQyMjY1YzQxOGFjMzM1MDEzOFEPKVM=: 00:26:31.103 10:34:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:26:31.103 10:34:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:31.103 10:34:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:31.103 10:34:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:31.103 10:34:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:31.103 10:34:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:31.103 10:34:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:31.103 10:34:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.103 10:34:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.103 10:34:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.103 10:34:08 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:26:31.103 10:34:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:31.103 10:34:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:31.103 10:34:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:31.103 10:34:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:31.103 10:34:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:31.103 10:34:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:31.103 10:34:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:31.103 10:34:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:31.103 10:34:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:31.103 10:34:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:31.103 10:34:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:31.103 10:34:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.103 10:34:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.674 nvme0n1 00:26:31.674 10:34:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.674 10:34:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.674 10:34:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:31.674 10:34:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.674 10:34:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.674 10:34:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.674 10:34:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:31.674 10:34:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:31.674 10:34:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.674 10:34:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.674 10:34:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.674 10:34:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:31.674 10:34:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:26:31.674 10:34:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:31.674 10:34:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:31.674 10:34:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:31.674 10:34:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:31.674 10:34:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDNjNjRjMDQ4M2ExYjI1YmRhNDI0ZjQ4NzFiNzk4MmJhODExNTQxZDc2NjRhNGYwcKa7Qg==: 00:26:31.674 10:34:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTkwMzFkNDdlZTc1ZTc5NmMzYjEzMTE0ODFjMjBmZDMwYWUyODMxNTBmZWQ2NDE2bZEhIA==: 00:26:31.674 10:34:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 
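Every connect_authenticate pass in this log issues the same host-side RPC sequence: restrict the allowed DH-CHAP digest and DH group, attach a controller over RDMA with the key under test, check that the controller actually came up, then detach it. A condensed sketch assembled from the rpc_cmd calls visible in the trace (rpc_cmd is the autotest wrapper around scripts/rpc.py; the NQNs and 192.168.100.8:4420 are the values this run uses, and key1/ckey1 refer to keys registered earlier in the test, not shown in this excerpt):

    rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]   # controller is present
    rpc_cmd bdev_nvme_detach_controller nvme0

Because bdev_nvme_set_options lists exactly one digest and one DH group, each attach exercises precisely the sha512/ffdhe combination under test.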
00:26:31.674 10:34:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:31.674 10:34:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDNjNjRjMDQ4M2ExYjI1YmRhNDI0ZjQ4NzFiNzk4MmJhODExNTQxZDc2NjRhNGYwcKa7Qg==: 00:26:31.674 10:34:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTkwMzFkNDdlZTc1ZTc5NmMzYjEzMTE0ODFjMjBmZDMwYWUyODMxNTBmZWQ2NDE2bZEhIA==: ]] 00:26:31.674 10:34:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTkwMzFkNDdlZTc1ZTc5NmMzYjEzMTE0ODFjMjBmZDMwYWUyODMxNTBmZWQ2NDE2bZEhIA==: 00:26:31.674 10:34:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:26:31.674 10:34:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:31.674 10:34:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:31.674 10:34:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:31.674 10:34:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:31.674 10:34:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:31.674 10:34:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:31.674 10:34:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.674 10:34:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.674 10:34:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.674 10:34:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:31.674 10:34:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:31.674 10:34:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:31.674 10:34:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:31.674 10:34:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:31.674 10:34:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:31.674 10:34:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:31.674 10:34:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:31.674 10:34:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:31.674 10:34:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:31.674 10:34:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:31.674 10:34:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:31.674 10:34:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.674 10:34:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.934 nvme0n1 00:26:31.934 10:34:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.934 10:34:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.934 10:34:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:31.934 10:34:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.934 
10:34:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.934 10:34:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.195 10:34:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:32.195 10:34:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:32.195 10:34:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.195 10:34:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.195 10:34:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.195 10:34:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:32.195 10:34:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:26:32.195 10:34:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:32.195 10:34:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:32.195 10:34:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:32.195 10:34:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:32.195 10:34:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjdmY2M5ODI3ODNlMjc2OWY3OTlkMzY4NmFlZTMxMDN8KQKJ: 00:26:32.195 10:34:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQyMzIyMzg4ZGVmYTMwMjkwMjMzN2I0ZmEyNDg3MDhJ0Uvw: 00:26:32.195 10:34:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:32.195 10:34:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:32.195 10:34:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjdmY2M5ODI3ODNlMjc2OWY3OTlkMzY4NmFlZTMxMDN8KQKJ: 00:26:32.195 10:34:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQyMzIyMzg4ZGVmYTMwMjkwMjMzN2I0ZmEyNDg3MDhJ0Uvw: ]] 00:26:32.195 10:34:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQyMzIyMzg4ZGVmYTMwMjkwMjMzN2I0ZmEyNDg3MDhJ0Uvw: 00:26:32.195 10:34:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:26:32.195 10:34:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:32.195 10:34:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:32.195 10:34:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:32.195 10:34:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:32.195 10:34:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:32.195 10:34:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:32.195 10:34:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.195 10:34:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.195 10:34:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.195 10:34:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:32.195 10:34:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:32.195 10:34:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:32.195 10:34:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:32.195 10:34:09 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:32.195 10:34:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:32.195 10:34:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:32.195 10:34:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:32.195 10:34:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:32.195 10:34:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:32.195 10:34:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:32.195 10:34:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:32.195 10:34:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.195 10:34:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.456 nvme0n1 00:26:32.457 10:34:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.457 10:34:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:32.457 10:34:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.457 10:34:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:32.457 10:34:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.457 10:34:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.457 10:34:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:32.457 10:34:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:32.457 10:34:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.457 10:34:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.457 10:34:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.457 10:34:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:32.457 10:34:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:26:32.457 10:34:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:32.457 10:34:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:32.457 10:34:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:32.457 10:34:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:32.457 10:34:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTA2NDI5M2JlM2Q4OTFhNTkwYzJmNmFlZTk4Mzg1Yzk1NjczY2Q4MjQyZTUzZWIyOuxDtw==: 00:26:32.457 10:34:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjI2ODVhYjRmMWU5ZTJkNWI3OTEwOWJjZTZlZGI0NmISBty3: 00:26:32.457 10:34:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:32.457 10:34:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:32.457 10:34:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTA2NDI5M2JlM2Q4OTFhNTkwYzJmNmFlZTk4Mzg1Yzk1NjczY2Q4MjQyZTUzZWIyOuxDtw==: 00:26:32.457 10:34:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjI2ODVhYjRmMWU5ZTJkNWI3OTEwOWJjZTZlZGI0NmISBty3: ]] 
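One detail worth noticing across these passes: key index 4 carries no controller (bidirectional) secret, so the ${ckeys[keyid]:+...} expansion at host/auth.sh@58 contributes no --dhchap-ctrlr-key argument and the attach at @61 authenticates in one direction only. A small standalone illustration of that expansion (ckey values abbreviated here; the full DHHC-1 strings appear in the log):

    # Controller keys per key index; index 4 is deliberately empty in this test.
    ckeys=( [0]="DHHC-1:03:MDMx..." [1]="DHHC-1:02:NTkw..."
            [2]="DHHC-1:01:YTQy..." [3]="DHHC-1:00:NjI2..." [4]="" )

    keyid=4; ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=4 -> ${#ckey[@]} extra args"    # 0 extra args: unidirectional authentication
    keyid=3; ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=3 -> ${ckey[*]}"                # --dhchap-ctrlr-key ckey3: controller is verified too

That matches the attach lines in the trace: key indexes 0-3 pass both --dhchap-key keyN and --dhchap-ctrlr-key ckeyN, while index 4 passes only --dhchap-key key4.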
00:26:32.457 10:34:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjI2ODVhYjRmMWU5ZTJkNWI3OTEwOWJjZTZlZGI0NmISBty3: 00:26:32.457 10:34:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:26:32.457 10:34:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:32.457 10:34:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:32.457 10:34:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:32.457 10:34:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:32.457 10:34:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:32.457 10:34:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:32.457 10:34:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.457 10:34:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.457 10:34:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.457 10:34:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:32.457 10:34:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:32.717 10:34:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:32.717 10:34:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:32.717 10:34:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:32.717 10:34:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:32.717 10:34:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:32.717 10:34:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:32.718 10:34:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:32.718 10:34:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:32.718 10:34:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:32.718 10:34:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:32.718 10:34:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.718 10:34:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.977 nvme0n1 00:26:32.977 10:34:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.977 10:34:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:32.977 10:34:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:32.977 10:34:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.977 10:34:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.977 10:34:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.977 10:34:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:32.977 10:34:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:32.977 10:34:10 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.977 10:34:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.977 10:34:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.977 10:34:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:32.977 10:34:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:26:32.977 10:34:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:32.977 10:34:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:32.977 10:34:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:32.977 10:34:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:32.977 10:34:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZThmNDI4MTIyM2ZkMTczYmZmZTg2NjliZmNkNDcwYTAyZDNmOTVjZjk1NzBhMTQ2Zjk2N2ZkNmI0MjY4NGI4M6+Fj4w=: 00:26:32.977 10:34:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:32.977 10:34:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:32.977 10:34:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:32.978 10:34:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZThmNDI4MTIyM2ZkMTczYmZmZTg2NjliZmNkNDcwYTAyZDNmOTVjZjk1NzBhMTQ2Zjk2N2ZkNmI0MjY4NGI4M6+Fj4w=: 00:26:32.978 10:34:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:32.978 10:34:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:26:32.978 10:34:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:32.978 10:34:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:32.978 10:34:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:32.978 10:34:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:32.978 10:34:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:32.978 10:34:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:32.978 10:34:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.978 10:34:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.978 10:34:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.978 10:34:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:32.978 10:34:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:32.978 10:34:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:32.978 10:34:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:32.978 10:34:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:32.978 10:34:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:32.978 10:34:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:32.978 10:34:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:32.978 10:34:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:32.978 10:34:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:32.978 10:34:10 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:32.978 10:34:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:32.978 10:34:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.978 10:34:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.546 nvme0n1 00:26:33.546 10:34:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.546 10:34:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:33.546 10:34:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:33.546 10:34:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.546 10:34:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.546 10:34:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.546 10:34:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:33.546 10:34:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:33.546 10:34:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.546 10:34:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.546 10:34:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.546 10:34:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:33.546 10:34:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:33.546 10:34:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:26:33.546 10:34:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:33.546 10:34:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:33.546 10:34:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:33.546 10:34:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:33.546 10:34:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIyYmI2MzZiMzAwODlmN2RkZTMyYzY0MDc0YjM4MjS+gdJ6: 00:26:33.546 10:34:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDMxNGY5MDNmNGNlOTA0ZDNiNDZmNWI5NjIzMmJlYTRjODJhMGJlYmI4NjVkNDQyMjY1YzQxOGFjMzM1MDEzOFEPKVM=: 00:26:33.546 10:34:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:33.546 10:34:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:33.546 10:34:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIyYmI2MzZiMzAwODlmN2RkZTMyYzY0MDc0YjM4MjS+gdJ6: 00:26:33.546 10:34:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDMxNGY5MDNmNGNlOTA0ZDNiNDZmNWI5NjIzMmJlYTRjODJhMGJlYmI4NjVkNDQyMjY1YzQxOGFjMzM1MDEzOFEPKVM=: ]] 00:26:33.546 10:34:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDMxNGY5MDNmNGNlOTA0ZDNiNDZmNWI5NjIzMmJlYTRjODJhMGJlYmI4NjVkNDQyMjY1YzQxOGFjMzM1MDEzOFEPKVM=: 00:26:33.546 10:34:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:26:33.546 10:34:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:33.546 10:34:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # 
digest=sha512 00:26:33.546 10:34:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:33.546 10:34:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:33.546 10:34:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:33.546 10:34:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:33.546 10:34:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.546 10:34:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.546 10:34:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.546 10:34:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:33.547 10:34:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:33.547 10:34:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:33.547 10:34:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:33.547 10:34:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.547 10:34:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.547 10:34:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:33.547 10:34:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:33.547 10:34:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:33.547 10:34:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:33.547 10:34:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:33.547 10:34:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:33.547 10:34:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.547 10:34:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.117 nvme0n1 00:26:34.117 10:34:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.117 10:34:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:34.117 10:34:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:34.117 10:34:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.117 10:34:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.117 10:34:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.117 10:34:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:34.117 10:34:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:34.117 10:34:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.117 10:34:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.117 10:34:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.117 10:34:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:34.117 10:34:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 1 00:26:34.117 10:34:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:34.117 10:34:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:34.117 10:34:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:34.117 10:34:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:34.117 10:34:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDNjNjRjMDQ4M2ExYjI1YmRhNDI0ZjQ4NzFiNzk4MmJhODExNTQxZDc2NjRhNGYwcKa7Qg==: 00:26:34.117 10:34:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTkwMzFkNDdlZTc1ZTc5NmMzYjEzMTE0ODFjMjBmZDMwYWUyODMxNTBmZWQ2NDE2bZEhIA==: 00:26:34.117 10:34:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:34.117 10:34:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:34.117 10:34:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDNjNjRjMDQ4M2ExYjI1YmRhNDI0ZjQ4NzFiNzk4MmJhODExNTQxZDc2NjRhNGYwcKa7Qg==: 00:26:34.117 10:34:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTkwMzFkNDdlZTc1ZTc5NmMzYjEzMTE0ODFjMjBmZDMwYWUyODMxNTBmZWQ2NDE2bZEhIA==: ]] 00:26:34.117 10:34:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTkwMzFkNDdlZTc1ZTc5NmMzYjEzMTE0ODFjMjBmZDMwYWUyODMxNTBmZWQ2NDE2bZEhIA==: 00:26:34.117 10:34:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:26:34.117 10:34:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:34.117 10:34:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:34.117 10:34:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:34.117 10:34:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:34.117 10:34:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:34.117 10:34:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:34.117 10:34:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.117 10:34:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.117 10:34:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.117 10:34:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:34.117 10:34:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:34.117 10:34:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:34.117 10:34:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:34.117 10:34:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:34.117 10:34:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:34.117 10:34:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:34.117 10:34:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:34.117 10:34:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:34.117 10:34:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:34.117 10:34:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:34.117 10:34:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # 
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:34.117 10:34:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.117 10:34:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.687 nvme0n1 00:26:34.687 10:34:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.687 10:34:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:34.687 10:34:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:34.687 10:34:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.687 10:34:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.687 10:34:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.687 10:34:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:34.687 10:34:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:34.687 10:34:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.687 10:34:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.687 10:34:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.687 10:34:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:34.687 10:34:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:26:34.687 10:34:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:34.687 10:34:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:34.687 10:34:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:34.687 10:34:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:34.687 10:34:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjdmY2M5ODI3ODNlMjc2OWY3OTlkMzY4NmFlZTMxMDN8KQKJ: 00:26:34.687 10:34:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQyMzIyMzg4ZGVmYTMwMjkwMjMzN2I0ZmEyNDg3MDhJ0Uvw: 00:26:34.687 10:34:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:34.687 10:34:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:34.687 10:34:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjdmY2M5ODI3ODNlMjc2OWY3OTlkMzY4NmFlZTMxMDN8KQKJ: 00:26:34.687 10:34:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTQyMzIyMzg4ZGVmYTMwMjkwMjMzN2I0ZmEyNDg3MDhJ0Uvw: ]] 00:26:34.687 10:34:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQyMzIyMzg4ZGVmYTMwMjkwMjMzN2I0ZmEyNDg3MDhJ0Uvw: 00:26:34.687 10:34:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:26:34.687 10:34:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:34.687 10:34:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:34.687 10:34:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:34.687 10:34:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:34.687 10:34:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:34.687 10:34:11 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:34.687 10:34:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.687 10:34:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.687 10:34:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.687 10:34:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:34.687 10:34:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:34.687 10:34:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:34.687 10:34:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:34.687 10:34:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:34.687 10:34:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:34.687 10:34:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:34.687 10:34:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:34.687 10:34:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:34.687 10:34:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:34.687 10:34:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:34.687 10:34:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:34.687 10:34:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.687 10:34:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.256 nvme0n1 00:26:35.256 10:34:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.256 10:34:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:35.256 10:34:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:35.256 10:34:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.257 10:34:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.257 10:34:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.257 10:34:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:35.257 10:34:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:35.257 10:34:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.257 10:34:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.516 10:34:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.516 10:34:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:35.516 10:34:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:26:35.516 10:34:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:35.516 10:34:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:35.516 10:34:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:35.516 10:34:12 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=3 00:26:35.516 10:34:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTA2NDI5M2JlM2Q4OTFhNTkwYzJmNmFlZTk4Mzg1Yzk1NjczY2Q4MjQyZTUzZWIyOuxDtw==: 00:26:35.516 10:34:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjI2ODVhYjRmMWU5ZTJkNWI3OTEwOWJjZTZlZGI0NmISBty3: 00:26:35.516 10:34:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:35.516 10:34:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:35.516 10:34:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTA2NDI5M2JlM2Q4OTFhNTkwYzJmNmFlZTk4Mzg1Yzk1NjczY2Q4MjQyZTUzZWIyOuxDtw==: 00:26:35.516 10:34:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjI2ODVhYjRmMWU5ZTJkNWI3OTEwOWJjZTZlZGI0NmISBty3: ]] 00:26:35.516 10:34:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjI2ODVhYjRmMWU5ZTJkNWI3OTEwOWJjZTZlZGI0NmISBty3: 00:26:35.516 10:34:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:26:35.516 10:34:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:35.516 10:34:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:35.516 10:34:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:35.516 10:34:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:35.516 10:34:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:35.516 10:34:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:35.516 10:34:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.516 10:34:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:35.516 10:34:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.516 10:34:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:35.516 10:34:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:35.516 10:34:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:35.516 10:34:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:35.516 10:34:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:35.516 10:34:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:35.516 10:34:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:35.516 10:34:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:35.516 10:34:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:35.516 10:34:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:35.516 10:34:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:35.516 10:34:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:35.516 10:34:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.516 10:34:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.084 nvme0n1 00:26:36.084 10:34:13 
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.084 10:34:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:36.084 10:34:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.084 10:34:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:36.084 10:34:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.084 10:34:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.084 10:34:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:36.084 10:34:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:36.084 10:34:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.084 10:34:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.084 10:34:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.084 10:34:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:36.084 10:34:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:26:36.084 10:34:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:36.084 10:34:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:36.085 10:34:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:36.085 10:34:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:36.085 10:34:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZThmNDI4MTIyM2ZkMTczYmZmZTg2NjliZmNkNDcwYTAyZDNmOTVjZjk1NzBhMTQ2Zjk2N2ZkNmI0MjY4NGI4M6+Fj4w=: 00:26:36.085 10:34:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:36.085 10:34:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:36.085 10:34:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:36.085 10:34:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZThmNDI4MTIyM2ZkMTczYmZmZTg2NjliZmNkNDcwYTAyZDNmOTVjZjk1NzBhMTQ2Zjk2N2ZkNmI0MjY4NGI4M6+Fj4w=: 00:26:36.085 10:34:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:36.085 10:34:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:26:36.085 10:34:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:36.085 10:34:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:36.085 10:34:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:36.085 10:34:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:36.085 10:34:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:36.085 10:34:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:36.085 10:34:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.085 10:34:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.085 10:34:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.085 10:34:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:36.085 10:34:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:36.085 10:34:13 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:36.085 10:34:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:36.085 10:34:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:36.085 10:34:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:36.085 10:34:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:36.085 10:34:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:36.085 10:34:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:36.085 10:34:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:36.085 10:34:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:36.085 10:34:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:36.085 10:34:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.085 10:34:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.655 nvme0n1 00:26:36.655 10:34:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.655 10:34:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:36.655 10:34:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:36.655 10:34:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.655 10:34:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.655 10:34:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.655 10:34:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:36.655 10:34:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:36.655 10:34:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.655 10:34:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.655 10:34:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.655 10:34:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:36.655 10:34:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:36.655 10:34:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:26:36.655 10:34:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:36.655 10:34:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:36.655 10:34:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:36.655 10:34:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:36.655 10:34:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzIyYmI2MzZiMzAwODlmN2RkZTMyYzY0MDc0YjM4MjS+gdJ6: 00:26:36.655 10:34:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDMxNGY5MDNmNGNlOTA0ZDNiNDZmNWI5NjIzMmJlYTRjODJhMGJlYmI4NjVkNDQyMjY1YzQxOGFjMzM1MDEzOFEPKVM=: 00:26:36.655 10:34:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:36.655 10:34:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # 
echo ffdhe8192 00:26:36.655 10:34:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzIyYmI2MzZiMzAwODlmN2RkZTMyYzY0MDc0YjM4MjS+gdJ6: 00:26:36.655 10:34:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDMxNGY5MDNmNGNlOTA0ZDNiNDZmNWI5NjIzMmJlYTRjODJhMGJlYmI4NjVkNDQyMjY1YzQxOGFjMzM1MDEzOFEPKVM=: ]] 00:26:36.655 10:34:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDMxNGY5MDNmNGNlOTA0ZDNiNDZmNWI5NjIzMmJlYTRjODJhMGJlYmI4NjVkNDQyMjY1YzQxOGFjMzM1MDEzOFEPKVM=: 00:26:36.655 10:34:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:26:36.655 10:34:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:36.655 10:34:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:36.655 10:34:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:36.655 10:34:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:36.655 10:34:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:36.655 10:34:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:36.655 10:34:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.655 10:34:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:36.655 10:34:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.655 10:34:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:36.655 10:34:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:36.655 10:34:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:36.655 10:34:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:36.655 10:34:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:36.655 10:34:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:36.655 10:34:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:36.655 10:34:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:36.655 10:34:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:36.655 10:34:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:36.655 10:34:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:36.655 10:34:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:36.655 10:34:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.655 10:34:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.596 nvme0n1 00:26:37.596 10:34:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.596 10:34:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:37.596 10:34:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:37.596 10:34:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.596 10:34:14 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:37.596 10:34:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.596 10:34:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:37.596 10:34:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:37.596 10:34:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.596 10:34:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.596 10:34:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.596 10:34:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:37.596 10:34:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:26:37.596 10:34:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:37.596 10:34:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:37.596 10:34:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:37.596 10:34:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:37.596 10:34:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDNjNjRjMDQ4M2ExYjI1YmRhNDI0ZjQ4NzFiNzk4MmJhODExNTQxZDc2NjRhNGYwcKa7Qg==: 00:26:37.596 10:34:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTkwMzFkNDdlZTc1ZTc5NmMzYjEzMTE0ODFjMjBmZDMwYWUyODMxNTBmZWQ2NDE2bZEhIA==: 00:26:37.596 10:34:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:37.596 10:34:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:37.596 10:34:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDNjNjRjMDQ4M2ExYjI1YmRhNDI0ZjQ4NzFiNzk4MmJhODExNTQxZDc2NjRhNGYwcKa7Qg==: 00:26:37.597 10:34:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTkwMzFkNDdlZTc1ZTc5NmMzYjEzMTE0ODFjMjBmZDMwYWUyODMxNTBmZWQ2NDE2bZEhIA==: ]] 00:26:37.597 10:34:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTkwMzFkNDdlZTc1ZTc5NmMzYjEzMTE0ODFjMjBmZDMwYWUyODMxNTBmZWQ2NDE2bZEhIA==: 00:26:37.597 10:34:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:26:37.597 10:34:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:37.597 10:34:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:37.597 10:34:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:37.597 10:34:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:37.597 10:34:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:37.597 10:34:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:37.597 10:34:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.597 10:34:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.597 10:34:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.597 10:34:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:37.597 10:34:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:37.597 10:34:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:37.597 10:34:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 
-- # local -A ip_candidates 00:26:37.597 10:34:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:37.597 10:34:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:37.597 10:34:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:37.597 10:34:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:37.597 10:34:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:37.597 10:34:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:37.597 10:34:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:37.597 10:34:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:37.597 10:34:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:37.597 10:34:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.537 nvme0n1 00:26:38.537 10:34:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.537 10:34:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:38.537 10:34:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.537 10:34:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:38.537 10:34:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.537 10:34:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.537 10:34:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:38.537 10:34:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:38.537 10:34:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.537 10:34:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.537 10:34:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.537 10:34:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:38.537 10:34:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:26:38.537 10:34:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:38.537 10:34:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:38.537 10:34:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:38.537 10:34:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:38.537 10:34:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjdmY2M5ODI3ODNlMjc2OWY3OTlkMzY4NmFlZTMxMDN8KQKJ: 00:26:38.537 10:34:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTQyMzIyMzg4ZGVmYTMwMjkwMjMzN2I0ZmEyNDg3MDhJ0Uvw: 00:26:38.538 10:34:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:38.538 10:34:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:38.538 10:34:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjdmY2M5ODI3ODNlMjc2OWY3OTlkMzY4NmFlZTMxMDN8KQKJ: 00:26:38.538 10:34:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:YTQyMzIyMzg4ZGVmYTMwMjkwMjMzN2I0ZmEyNDg3MDhJ0Uvw: ]] 00:26:38.538 10:34:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTQyMzIyMzg4ZGVmYTMwMjkwMjMzN2I0ZmEyNDg3MDhJ0Uvw: 00:26:38.538 10:34:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:26:38.538 10:34:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:38.538 10:34:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:38.538 10:34:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:38.538 10:34:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:38.538 10:34:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:38.538 10:34:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:38.538 10:34:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.538 10:34:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.538 10:34:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.538 10:34:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:38.538 10:34:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:38.538 10:34:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:38.538 10:34:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:38.538 10:34:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:38.538 10:34:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:38.538 10:34:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:38.538 10:34:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:38.538 10:34:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:38.538 10:34:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:38.538 10:34:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:38.538 10:34:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:38.538 10:34:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.538 10:34:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.481 nvme0n1 00:26:39.481 10:34:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.481 10:34:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:39.482 10:34:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:39.482 10:34:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.482 10:34:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.482 10:34:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.482 10:34:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:39.482 10:34:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:39.482 10:34:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.482 10:34:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.482 10:34:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.482 10:34:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:39.482 10:34:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:26:39.482 10:34:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:39.482 10:34:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:39.482 10:34:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:39.482 10:34:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:39.482 10:34:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTA2NDI5M2JlM2Q4OTFhNTkwYzJmNmFlZTk4Mzg1Yzk1NjczY2Q4MjQyZTUzZWIyOuxDtw==: 00:26:39.482 10:34:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjI2ODVhYjRmMWU5ZTJkNWI3OTEwOWJjZTZlZGI0NmISBty3: 00:26:39.482 10:34:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:39.482 10:34:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:39.482 10:34:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTA2NDI5M2JlM2Q4OTFhNTkwYzJmNmFlZTk4Mzg1Yzk1NjczY2Q4MjQyZTUzZWIyOuxDtw==: 00:26:39.482 10:34:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjI2ODVhYjRmMWU5ZTJkNWI3OTEwOWJjZTZlZGI0NmISBty3: ]] 00:26:39.482 10:34:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjI2ODVhYjRmMWU5ZTJkNWI3OTEwOWJjZTZlZGI0NmISBty3: 00:26:39.482 10:34:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:26:39.482 10:34:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:39.482 10:34:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:39.482 10:34:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:39.482 10:34:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:39.482 10:34:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:39.482 10:34:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:39.482 10:34:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.482 10:34:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.482 10:34:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:39.482 10:34:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:39.482 10:34:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:39.482 10:34:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:39.482 10:34:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:39.482 10:34:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:39.482 10:34:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:39.482 10:34:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:39.482 10:34:16 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:39.482 10:34:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:39.482 10:34:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:39.482 10:34:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:39.482 10:34:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:39.482 10:34:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:39.482 10:34:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.425 nvme0n1 00:26:40.425 10:34:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.425 10:34:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:40.425 10:34:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:40.425 10:34:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.425 10:34:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.425 10:34:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.425 10:34:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:40.425 10:34:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:40.425 10:34:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.425 10:34:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.425 10:34:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.425 10:34:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:40.425 10:34:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:26:40.425 10:34:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:40.425 10:34:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:40.425 10:34:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:40.425 10:34:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:40.425 10:34:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZThmNDI4MTIyM2ZkMTczYmZmZTg2NjliZmNkNDcwYTAyZDNmOTVjZjk1NzBhMTQ2Zjk2N2ZkNmI0MjY4NGI4M6+Fj4w=: 00:26:40.425 10:34:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:40.425 10:34:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:40.425 10:34:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:40.425 10:34:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZThmNDI4MTIyM2ZkMTczYmZmZTg2NjliZmNkNDcwYTAyZDNmOTVjZjk1NzBhMTQ2Zjk2N2ZkNmI0MjY4NGI4M6+Fj4w=: 00:26:40.425 10:34:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:40.425 10:34:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:26:40.425 10:34:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:40.425 10:34:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:40.425 10:34:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe8192 00:26:40.425 10:34:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:40.425 10:34:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:40.425 10:34:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:40.425 10:34:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.425 10:34:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.425 10:34:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.425 10:34:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:40.425 10:34:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:40.425 10:34:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:40.425 10:34:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:40.425 10:34:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:40.425 10:34:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:40.425 10:34:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:40.425 10:34:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:40.425 10:34:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:40.425 10:34:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:40.425 10:34:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:40.425 10:34:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:40.425 10:34:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.425 10:34:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.370 nvme0n1 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # 
digest=sha256 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDNjNjRjMDQ4M2ExYjI1YmRhNDI0ZjQ4NzFiNzk4MmJhODExNTQxZDc2NjRhNGYwcKa7Qg==: 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTkwMzFkNDdlZTc1ZTc5NmMzYjEzMTE0ODFjMjBmZDMwYWUyODMxNTBmZWQ2NDE2bZEhIA==: 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDNjNjRjMDQ4M2ExYjI1YmRhNDI0ZjQ4NzFiNzk4MmJhODExNTQxZDc2NjRhNGYwcKa7Qg==: 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTkwMzFkNDdlZTc1ZTc5NmMzYjEzMTE0ODFjMjBmZDMwYWUyODMxNTBmZWQ2NDE2bZEhIA==: ]] 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTkwMzFkNDdlZTc1ZTc5NmMzYjEzMTE0ODFjMjBmZDMwYWUyODMxNTBmZWQ2NDE2bZEhIA==: 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 
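The entries around this point wrap rpc_cmd in the harness's NOT helper (common/autotest_common.sh), which inverts the exit status so that an attach attempt made without the required DH-CHAP key is expected to fail with the Input/output error captured in the JSON-RPC response just below. A minimal stand-alone sketch of that expected-failure pattern follows; expect_failure is an illustrative stand-in for NOT (not the harness implementation), and it assumes rpc_cmd and the target-side key setup traced earlier are already in place.

expect_failure() {
    # Succeed only when the wrapped command fails, mirroring what NOT checks here.
    if "$@"; then
        echo "unexpectedly succeeded: $*" >&2
        return 1
    fi
    return 0
}

# Attaching without any --dhchap-key should be rejected by the authenticating target.
expect_failure rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
    -a 192.168.100.8 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0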
00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.371 request: 00:26:41.371 { 00:26:41.371 "name": "nvme0", 00:26:41.371 "trtype": "rdma", 00:26:41.371 "traddr": "192.168.100.8", 00:26:41.371 "adrfam": "ipv4", 00:26:41.371 "trsvcid": "4420", 00:26:41.371 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:41.371 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:41.371 "prchk_reftag": false, 00:26:41.371 "prchk_guard": false, 00:26:41.371 "hdgst": false, 00:26:41.371 "ddgst": false, 00:26:41.371 "method": "bdev_nvme_attach_controller", 00:26:41.371 "req_id": 1 00:26:41.371 } 00:26:41.371 Got JSON-RPC error response 00:26:41.371 response: 00:26:41.371 { 00:26:41.371 "code": -5, 00:26:41.371 "message": "Input/output error" 00:26:41.371 } 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.371 10:34:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.633 request: 00:26:41.633 { 00:26:41.633 "name": "nvme0", 00:26:41.633 "trtype": "rdma", 00:26:41.633 "traddr": "192.168.100.8", 00:26:41.633 "adrfam": "ipv4", 00:26:41.633 "trsvcid": "4420", 00:26:41.633 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:41.633 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:41.633 "prchk_reftag": false, 00:26:41.633 "prchk_guard": false, 00:26:41.633 "hdgst": false, 00:26:41.633 "ddgst": false, 00:26:41.633 "dhchap_key": "key2", 00:26:41.633 "method": "bdev_nvme_attach_controller", 00:26:41.633 "req_id": 1 00:26:41.633 } 00:26:41.633 Got JSON-RPC error response 00:26:41.633 response: 00:26:41.633 { 00:26:41.633 "code": -5, 00:26:41.633 "message": "Input/output error" 00:26:41.633 } 00:26:41.633 10:34:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:41.633 10:34:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:26:41.633 10:34:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:41.633 10:34:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:41.633 10:34:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:41.633 10:34:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:26:41.633 10:34:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:26:41.633 10:34:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.633 10:34:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.633 10:34:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.633 10:34:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:26:41.633 10:34:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:26:41.633 10:34:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:41.633 10:34:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:41.633 10:34:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:41.633 10:34:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:41.633 10:34:18 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:41.633 10:34:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:26:41.633 10:34:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:26:41.633 10:34:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:26:41.633 10:34:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:26:41.633 10:34:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:26:41.633 10:34:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:41.633 10:34:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:26:41.633 10:34:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:41.633 10:34:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:41.633 10:34:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:41.633 10:34:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:41.633 10:34:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:41.633 10:34:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:41.633 10:34:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.633 10:34:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.633 request: 00:26:41.633 { 00:26:41.633 "name": "nvme0", 00:26:41.633 "trtype": "rdma", 00:26:41.633 "traddr": "192.168.100.8", 00:26:41.633 "adrfam": "ipv4", 00:26:41.633 "trsvcid": "4420", 00:26:41.633 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:41.633 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:41.633 "prchk_reftag": false, 00:26:41.633 "prchk_guard": false, 00:26:41.633 "hdgst": false, 00:26:41.633 "ddgst": false, 00:26:41.633 "dhchap_key": "key1", 00:26:41.633 "dhchap_ctrlr_key": "ckey2", 00:26:41.633 "method": "bdev_nvme_attach_controller", 00:26:41.633 "req_id": 1 00:26:41.633 } 00:26:41.633 Got JSON-RPC error response 00:26:41.633 response: 00:26:41.633 { 00:26:41.633 "code": -5, 00:26:41.633 "message": "Input/output error" 00:26:41.633 } 00:26:41.633 10:34:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:41.633 10:34:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:26:41.633 10:34:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:41.633 10:34:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:41.633 10:34:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:41.633 10:34:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:26:41.633 10:34:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 
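All three negative attempts above (no key, key2 alone, and key1 paired with ckey2) return code -5, Input/output error, which is what the test asserts before clearing its trap and moving on to cleanup. For contrast, each successful pass earlier in the log uses the same four RPCs; a condensed sketch of one such iteration (sha512 / ffdhe6144 / key0, as traced above) is shown here, assuming the DH-CHAP key names and the kernel-target configuration written by nvmet_auth_set_key are already in place:

# Restrict the host to the digest/dhgroup under test, then attach with matching keys.
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
    -a 192.168.100.8 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
# Verify the controller came up under the expected name, then detach it.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
rpc_cmd bdev_nvme_detach_controller nvme0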
00:26:41.633 10:34:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:26:41.633 10:34:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:41.633 10:34:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:26:41.633 10:34:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:26:41.633 10:34:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:26:41.633 10:34:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:26:41.633 10:34:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:41.633 10:34:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:26:41.633 rmmod nvme_rdma 00:26:41.895 rmmod nvme_fabrics 00:26:41.895 10:34:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:41.895 10:34:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:26:41.895 10:34:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:26:41.895 10:34:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 3067271 ']' 00:26:41.895 10:34:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 3067271 00:26:41.895 10:34:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 3067271 ']' 00:26:41.895 10:34:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 3067271 00:26:41.895 10:34:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:26:41.895 10:34:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:41.895 10:34:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3067271 00:26:41.895 10:34:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:41.895 10:34:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:41.895 10:34:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3067271' 00:26:41.895 killing process with pid 3067271 00:26:41.895 10:34:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 3067271 00:26:41.895 10:34:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 3067271 00:26:41.895 10:34:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:41.895 10:34:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:26:41.895 10:34:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:41.895 10:34:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:41.895 10:34:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:26:41.895 10:34:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:26:41.895 10:34:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:26:41.895 10:34:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:41.895 10:34:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:41.895 10:34:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir 
/sys/kernel/config/nvmet/ports/1 00:26:41.895 10:34:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:41.895 10:34:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:26:41.895 10:34:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_rdma nvmet 00:26:42.157 10:34:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:26:46.363 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:26:46.363 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:26:46.363 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:26:46.363 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:26:46.363 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:26:46.363 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:26:46.363 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:26:46.363 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:26:46.363 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:26:46.363 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:26:46.363 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:26:46.363 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:26:46.363 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:26:46.363 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:26:46.363 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:26:46.363 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:26:46.363 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:26:46.363 10:34:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.rOc /tmp/spdk.key-null.uww /tmp/spdk.key-sha256.T0Z /tmp/spdk.key-sha384.n8n /tmp/spdk.key-sha512.KTu /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log 00:26:46.363 10:34:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:26:49.663 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:26:49.663 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:26:49.663 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:26:49.663 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:26:49.663 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:26:49.663 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:26:49.663 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:26:49.663 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:26:49.663 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:26:49.663 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:26:49.663 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:26:49.663 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:26:49.663 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:26:49.663 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:26:49.663 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:26:49.663 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:26:49.663 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:26:49.923 00:26:49.923 real 1m5.565s 00:26:49.923 user 0m59.712s 00:26:49.923 sys 0m16.290s 00:26:49.924 10:34:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:49.924 10:34:26 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.924 ************************************ 00:26:49.924 END TEST nvmf_auth_host 
00:26:49.924 ************************************ 00:26:49.924 10:34:26 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:26:49.924 10:34:26 nvmf_rdma -- nvmf/nvmf.sh@107 -- # [[ rdma == \t\c\p ]] 00:26:49.924 10:34:26 nvmf_rdma -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:26:49.924 10:34:26 nvmf_rdma -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:26:49.924 10:34:26 nvmf_rdma -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:26:49.924 10:34:26 nvmf_rdma -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:26:49.924 10:34:26 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:49.924 10:34:26 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:49.924 10:34:26 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:26:49.924 ************************************ 00:26:49.924 START TEST nvmf_bdevperf 00:26:49.924 ************************************ 00:26:49.924 10:34:27 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:26:49.924 * Looking for test storage... 00:26:49.924 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:26:49.924 10:34:27 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:26:50.185 10:34:27 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:26:50.185 10:34:27 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:50.185 10:34:27 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:50.185 10:34:27 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:50.185 10:34:27 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:50.185 10:34:27 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:50.185 10:34:27 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:50.185 10:34:27 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:50.185 10:34:27 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:50.185 10:34:27 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:50.185 10:34:27 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:50.185 10:34:27 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:50.185 10:34:27 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:50.185 10:34:27 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:50.185 10:34:27 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:50.185 10:34:27 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:50.185 10:34:27 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:50.185 10:34:27 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:26:50.185 10:34:27 nvmf_rdma.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:50.185 10:34:27 nvmf_rdma.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
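The cleanup traced just before nvmf_auth_host ends walks the kernel nvmet configfs tree bottom-up and only then unloads the modules. A sketch of that order, assuming the same subsystem, host and port names used in this run; the target of the bare "echo 0" in the trace is assumed to be the namespace enable attribute:

# Sketch of the configfs teardown order from the trace (standard /sys/kernel/config/nvmet layout).
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
rm "$subsys/allowed_hosts/nqn.2024-02.io.spdk:host0"            # unlink the allowed host
rmdir "$nvmet/hosts/nqn.2024-02.io.spdk:host0"                  # drop the host entry
echo 0 > "$subsys/namespaces/1/enable"                          # disable the namespace (attribute path assumed)
rm -f "$nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0"    # detach the subsystem from port 1
rmdir "$subsys/namespaces/1"                                    # remove the namespace
rmdir "$nvmet/ports/1"                                          # remove the port
rmdir "$subsys"                                                 # remove the subsystem itself
modprobe -r nvmet_rdma nvmet                                    # finally unload the kernel target modules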
00:26:50.185 10:34:27 nvmf_rdma.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:50.185 10:34:27 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.185 10:34:27 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.185 10:34:27 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.185 10:34:27 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:26:50.185 10:34:27 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.185 10:34:27 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:26:50.185 10:34:27 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:50.185 10:34:27 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:50.185 10:34:27 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:50.185 10:34:27 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:50.185 10:34:27 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:50.185 10:34:27 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:50.185 10:34:27 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:50.185 10:34:27 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:50.185 10:34:27 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:50.185 10:34:27 
nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:50.185 10:34:27 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:26:50.185 10:34:27 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:26:50.185 10:34:27 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:50.185 10:34:27 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:50.185 10:34:27 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:50.185 10:34:27 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:50.185 10:34:27 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:50.185 10:34:27 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:50.185 10:34:27 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:50.185 10:34:27 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:50.185 10:34:27 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:50.185 10:34:27 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:26:50.185 10:34:27 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:26:58.325 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:26:58.325 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:58.325 10:34:34 
nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:26:58.325 Found net devices under 0000:98:00.0: mlx_0_0 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:26:58.325 Found net devices under 0000:98:00.1: mlx_0_1 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@420 -- # rdma_device_init 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@58 -- # uname 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@62 -- # modprobe ib_cm 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@63 -- # modprobe ib_core 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@64 -- # modprobe ib_umad 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:26:58.325 10:34:34 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@66 -- # modprobe iw_cm 00:26:58.325 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:26:58.325 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:26:58.325 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@502 -- # allocate_nic_ips 00:26:58.325 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:26:58.325 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@73 -- # get_rdma_if_list 00:26:58.325 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:58.325 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:26:58.325 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:26:58.325 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:58.325 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:26:58.325 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:58.325 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:58.325 10:34:35 
nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:58.325 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:26:58.325 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:26:58.325 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:58.325 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:58.325 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:26:58.326 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:58.326 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:26:58.326 altname enp152s0f0np0 00:26:58.326 altname ens817f0np0 00:26:58.326 inet 192.168.100.8/24 scope global mlx_0_0 00:26:58.326 valid_lft forever preferred_lft forever 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:26:58.326 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:58.326 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:26:58.326 altname enp152s0f1np1 00:26:58.326 altname ens817f1np1 00:26:58.326 inet 192.168.100.9/24 scope global mlx_0_1 00:26:58.326 valid_lft forever preferred_lft forever 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@454 -- # 
NVMF_TRANSPORT_OPTS='-t rdma' 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@86 -- # get_rdma_if_list 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:26:58.326 192.168.100.9' 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:26:58.326 192.168.100.9' 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@457 -- # head -n 1 00:26:58.326 10:34:35 
nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:26:58.326 192.168.100.9' 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@458 -- # tail -n +2 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@458 -- # head -n 1 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3085913 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3085913 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 3085913 ']' 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:58.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:58.326 10:34:35 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:58.326 [2024-07-15 10:34:35.261829] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:26:58.326 [2024-07-15 10:34:35.261890] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:58.326 EAL: No free 2048 kB hugepages reported on node 1 00:26:58.326 [2024-07-15 10:34:35.346674] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:58.326 [2024-07-15 10:34:35.414447] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:58.326 [2024-07-15 10:34:35.414490] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
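The address discovery earlier in this block enumerates the RDMA netdevs (mlx_0_0, mlx_0_1) and pulls their IPv4 addresses with the ip/awk/cut pipeline shown in the trace, which is where NVMF_FIRST_TARGET_IP=192.168.100.8 and NVMF_SECOND_TARGET_IP=192.168.100.9 come from. A stand-alone sketch of that extraction, assuming the same interface names:

# Sketch: replicate the get_ip_address helper from the trace for one interface at a time.
get_ip_address() {
    local interface=$1
    # 'ip -o -4 addr show' prints "IDX: IFACE inet A.B.C.D/PREFIX ..."; field 4 holds the
    # address, and cut strips the /PREFIX suffix.
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)    # 192.168.100.8 in this run
NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)   # 192.168.100.9 in this run
echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"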
00:26:58.326 [2024-07-15 10:34:35.414497] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:58.326 [2024-07-15 10:34:35.414506] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:58.326 [2024-07-15 10:34:35.414512] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:58.326 [2024-07-15 10:34:35.414637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:58.326 [2024-07-15 10:34:35.414793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:58.326 [2024-07-15 10:34:35.414794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:58.898 10:34:36 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:58.898 10:34:36 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:26:58.898 10:34:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:58.898 10:34:36 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:58.898 10:34:36 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:58.898 10:34:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:58.898 10:34:36 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:26:58.898 10:34:36 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.898 10:34:36 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:59.159 [2024-07-15 10:34:36.106932] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1d15920/0x1d19e10) succeed. 00:26:59.159 [2024-07-15 10:34:36.119918] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1d16ec0/0x1d5b4a0) succeed. 
00:26:59.159 10:34:36 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.159 10:34:36 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:59.159 10:34:36 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.159 10:34:36 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:59.159 Malloc0 00:26:59.159 10:34:36 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.159 10:34:36 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:59.159 10:34:36 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.159 10:34:36 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:59.159 10:34:36 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.159 10:34:36 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:59.159 10:34:36 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.159 10:34:36 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:59.159 10:34:36 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.159 10:34:36 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:26:59.159 10:34:36 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.159 10:34:36 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:59.159 [2024-07-15 10:34:36.274201] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:26:59.159 10:34:36 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.159 10:34:36 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:26:59.159 10:34:36 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:26:59.159 10:34:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:26:59.159 10:34:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:26:59.159 10:34:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:59.159 10:34:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:59.159 { 00:26:59.159 "params": { 00:26:59.159 "name": "Nvme$subsystem", 00:26:59.159 "trtype": "$TEST_TRANSPORT", 00:26:59.159 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:59.159 "adrfam": "ipv4", 00:26:59.159 "trsvcid": "$NVMF_PORT", 00:26:59.159 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:59.159 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:59.159 "hdgst": ${hdgst:-false}, 00:26:59.159 "ddgst": ${ddgst:-false} 00:26:59.159 }, 00:26:59.159 "method": "bdev_nvme_attach_controller" 00:26:59.159 } 00:26:59.159 EOF 00:26:59.159 )") 00:26:59.159 10:34:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:26:59.159 10:34:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 
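The tgt_init sequence traced above provisions the freshly started nvmf_tgt entirely over JSON-RPC: an RDMA transport, a 64 MiB malloc bdev with 512-byte blocks, the cnode1 subsystem, its namespace, and an RDMA listener. The same steps issued by hand, as a sketch that assumes scripts/rpc.py and the default /var/tmp/spdk.sock socket:

# Sketch mirroring the rpc_cmd calls in the trace, in the same order.
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t rdma -a 192.168.100.8 -s 4420
# After the last call the target logs:
#   *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***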
00:26:59.159 10:34:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:26:59.159 10:34:36 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:59.159 "params": { 00:26:59.159 "name": "Nvme1", 00:26:59.159 "trtype": "rdma", 00:26:59.159 "traddr": "192.168.100.8", 00:26:59.159 "adrfam": "ipv4", 00:26:59.159 "trsvcid": "4420", 00:26:59.159 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:59.159 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:59.159 "hdgst": false, 00:26:59.159 "ddgst": false 00:26:59.159 }, 00:26:59.159 "method": "bdev_nvme_attach_controller" 00:26:59.159 }' 00:26:59.159 [2024-07-15 10:34:36.334274] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:26:59.159 [2024-07-15 10:34:36.334328] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3086260 ] 00:26:59.419 EAL: No free 2048 kB hugepages reported on node 1 00:26:59.419 [2024-07-15 10:34:36.400510] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:59.419 [2024-07-15 10:34:36.465162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:59.679 Running I/O for 1 seconds... 00:27:00.621 00:27:00.621 Latency(us) 00:27:00.621 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:00.621 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:00.621 Verification LBA range: start 0x0 length 0x4000 00:27:00.621 Nvme1n1 : 1.01 14237.07 55.61 0.00 0.00 8933.72 3194.88 22063.79 00:27:00.621 =================================================================================================================== 00:27:00.621 Total : 14237.07 55.61 0.00 0.00 8933.72 3194.88 22063.79 00:27:00.621 10:34:37 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3086538 00:27:00.621 10:34:37 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:27:00.621 10:34:37 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:27:00.621 10:34:37 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:27:00.621 10:34:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:27:00.621 10:34:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:27:00.621 10:34:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:00.621 10:34:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:00.621 { 00:27:00.621 "params": { 00:27:00.621 "name": "Nvme$subsystem", 00:27:00.621 "trtype": "$TEST_TRANSPORT", 00:27:00.621 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:00.621 "adrfam": "ipv4", 00:27:00.621 "trsvcid": "$NVMF_PORT", 00:27:00.621 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:00.621 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:00.621 "hdgst": ${hdgst:-false}, 00:27:00.621 "ddgst": ${ddgst:-false} 00:27:00.621 }, 00:27:00.622 "method": "bdev_nvme_attach_controller" 00:27:00.622 } 00:27:00.622 EOF 00:27:00.622 )") 00:27:00.622 10:34:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:27:00.882 10:34:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 
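The host side above renders a one-controller config with gen_nvmf_target_json, whose "params"/"method": "bdev_nvme_attach_controller" fragment is printed in the trace, and hands it to bdevperf over an anonymous fd (/dev/fd/62) for a 1-second verify run at queue depth 128 with 4 KiB I/O. A sketch of that invocation, assuming it is launched from the SPDK repository root; the {"subsystems": [{"subsystem": "bdev", "config": [...]}]} envelope around the printed fragment is an assumption, since only the inner block appears in this trace:

# Illustrative temp path; the harness streams the same JSON over /dev/fd/62 instead.
cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "rdma",
            "traddr": "192.168.100.8",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF
./build/examples/bdevperf --json /tmp/bdevperf_nvme.json -q 128 -o 4096 -w verify -t 1
# The second bdevperf run in this test uses "-t 15 -f" and is paired with
# "kill -9 $nvmfpid" plus a sleep, so the verify workload is still in flight
# when the target goes away (hence the aborted-command dump further below).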
00:27:00.882 10:34:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:27:00.882 10:34:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:00.882 "params": { 00:27:00.882 "name": "Nvme1", 00:27:00.882 "trtype": "rdma", 00:27:00.882 "traddr": "192.168.100.8", 00:27:00.882 "adrfam": "ipv4", 00:27:00.882 "trsvcid": "4420", 00:27:00.882 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:00.882 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:00.882 "hdgst": false, 00:27:00.882 "ddgst": false 00:27:00.882 }, 00:27:00.882 "method": "bdev_nvme_attach_controller" 00:27:00.882 }' 00:27:00.882 [2024-07-15 10:34:37.856975] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:27:00.882 [2024-07-15 10:34:37.857036] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3086538 ] 00:27:00.882 EAL: No free 2048 kB hugepages reported on node 1 00:27:00.882 [2024-07-15 10:34:37.921489] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:00.882 [2024-07-15 10:34:37.985723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:01.148 Running I/O for 15 seconds... 00:27:03.780 10:34:40 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3085913 00:27:03.780 10:34:40 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:27:04.725 [2024-07-15 10:34:41.846972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:99360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.725 [2024-07-15 10:34:41.847017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:38663000 sqhd:52b0 p:0 m:0 dnr:0 00:27:04.725 [2024-07-15 10:34:41.847037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:99368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.725 [2024-07-15 10:34:41.847045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:38663000 sqhd:52b0 p:0 m:0 dnr:0 00:27:04.725 [2024-07-15 10:34:41.847055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:99376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.725 [2024-07-15 10:34:41.847062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:38663000 sqhd:52b0 p:0 m:0 dnr:0 00:27:04.725 [2024-07-15 10:34:41.847071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:99384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.725 [2024-07-15 10:34:41.847079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:38663000 sqhd:52b0 p:0 m:0 dnr:0 00:27:04.725 [2024-07-15 10:34:41.847088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:99392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.725 [2024-07-15 10:34:41.847095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:38663000 sqhd:52b0 p:0 m:0 dnr:0 00:27:04.725 [2024-07-15 10:34:41.847104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:99400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.725 [2024-07-15 10:34:41.847111] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:38663000 sqhd:52b0 p:0 m:0 dnr:0 00:27:04.725 [2024-07-15 10:34:41.847120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:99408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.725 [2024-07-15 10:34:41.847127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:38663000 sqhd:52b0 p:0 m:0 dnr:0 00:27:04.725 [2024-07-15 10:34:41.847137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:99416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.725 [2024-07-15 10:34:41.847144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:38663000 sqhd:52b0 p:0 m:0 dnr:0 00:27:04.725 [2024-07-15 10:34:41.847153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:99424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.725 [2024-07-15 10:34:41.847160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:38663000 sqhd:52b0 p:0 m:0 dnr:0 00:27:04.725 [2024-07-15 10:34:41.847169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:99432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.725 [2024-07-15 10:34:41.847176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:38663000 sqhd:52b0 p:0 m:0 dnr:0 00:27:04.725 [2024-07-15 10:34:41.847185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:99440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.725 [2024-07-15 10:34:41.847197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:38663000 sqhd:52b0 p:0 m:0 dnr:0 00:27:04.726 [2024-07-15 10:34:41.847206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:99448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.726 [2024-07-15 10:34:41.847213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:38663000 sqhd:52b0 p:0 m:0 dnr:0 00:27:04.726 [2024-07-15 10:34:41.847222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:99456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.726 [2024-07-15 10:34:41.847233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:38663000 sqhd:52b0 p:0 m:0 dnr:0 00:27:04.726 [2024-07-15 10:34:41.847243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:99464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.726 [2024-07-15 10:34:41.847249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:38663000 sqhd:52b0 p:0 m:0 dnr:0 00:27:04.726 [2024-07-15 10:34:41.847258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:99472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.726 [2024-07-15 10:34:41.847265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:38663000 sqhd:52b0 p:0 m:0 dnr:0 00:27:04.726 [2024-07-15 10:34:41.847274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:99480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:04.726 [2024-07-15 
10:34:41.847281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:38663000 sqhd:52b0 p:0 m:0 dnr:0 00:27:04.726 [2024-07-15 10:34:41.847290 through 10:34:41.849074: the same nvme_io_qpair_print_command / spdk_nvme_print_completion pair repeats here for every remaining queued command on qid:1, WRITE lba:99488 through lba:100344 (len:8, SGL DATA BLOCK) and READ lba:99328 through lba:99344 (len:8, SGL KEYED DATA BLOCK, key:0x182f00), each one completed as ABORTED - SQ DELETION (00/08) cdw0:38663000 sqhd:52b0 p:0 m:0 dnr:0] 00:27:04.728
[2024-07-15 10:34:41.851399] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:04.728 [2024-07-15 10:34:41.851411] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:04.728 [2024-07-15 10:34:41.851418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99352 len:8 PRP1 0x0 PRP2 0x0 00:27:04.728 [2024-07-15 10:34:41.851426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:04.728 [2024-07-15 10:34:41.851459] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4980 was disconnected and freed. reset controller. 00:27:04.728 [2024-07-15 10:34:41.855110] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.728 [2024-07-15 10:34:41.875143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:27:04.728 [2024-07-15 10:34:41.879508] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:04.728 [2024-07-15 10:34:41.879527] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:04.728 [2024-07-15 10:34:41.879533] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:27:06.115 [2024-07-15 10:34:42.883780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:27:06.115 [2024-07-15 10:34:42.883801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:06.115 [2024-07-15 10:34:42.884019] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.115 [2024-07-15 10:34:42.884028] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.115 [2024-07-15 10:34:42.884036] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:27:06.115 [2024-07-15 10:34:42.884469] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:06.115 [2024-07-15 10:34:42.887557] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
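Everything from the SQ-deletion aborts above through the repeated "Resetting controller failed" messages is the host side of the story: bdevperf's NVMe bdev controller lost its I/O qpair, aborted whatever was still queued, and now retries the controller reset roughly once a second. Each retry is rejected at the RDMA CM layer (RDMA_CM_EVENT_REJECTED, connect error -74) because the nvmf target process is gone (the shell reports pid 3085913 as Killed just below), so nothing answers on 192.168.100.8:4420 until tgt_init restarts the target. For orientation, the Nvme1n1 device that appears in the results table later in this log corresponds to a bdev_nvme_attach_controller call against that same address and subsystem NQN; the sketch below shows the shape such a configuration would take as a bdevperf JSON config. The file name and the exact way bdevperf.sh generates its config are assumptions here; only the address, port, NQN and RPC method name are taken from this log.

# Hypothetical config file name; the RPC method and parameters mirror the address
# and subsystem NQN used throughout this run. bdevperf would consume a config like
# this through the generic SPDK --json option rather than over a live RPC socket.
cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "rdma",
            "adrfam": "ipv4",
            "traddr": "192.168.100.8",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1"
          }
        }
      ]
    }
  ]
}
EOF

bdevperf is then pointed at a config of this form (for instance with --json /tmp/bdevperf_nvme.json) together with the workload options that reappear in the Job line of the results below: queue depth 128 (-q 128), 4096-byte I/O (-o 4096), verify workload (-w verify) for 15 seconds (-t 15).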
00:27:06.115 [2024-07-15 10:34:42.898323] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.115 [2024-07-15 10:34:42.901714] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:06.115 [2024-07-15 10:34:42.901732] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:06.115 [2024-07-15 10:34:42.901739] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:27:06.685 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3085913 Killed "${NVMF_APP[@]}" "$@" 00:27:06.685 10:34:43 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:27:06.685 10:34:43 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:27:06.685 10:34:43 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:06.685 10:34:43 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:06.685 10:34:43 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:06.685 10:34:43 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3087618 00:27:06.685 10:34:43 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3087618 00:27:06.685 10:34:43 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:06.685 10:34:43 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 3087618 ']' 00:27:06.685 10:34:43 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:06.685 10:34:43 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:06.685 10:34:43 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:06.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:06.685 10:34:43 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:06.685 10:34:43 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:06.685 [2024-07-15 10:34:43.876023] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:27:06.685 [2024-07-15 10:34:43.876063] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:06.946 EAL: No free 2048 kB hugepages reported on node 1 00:27:06.946 [2024-07-15 10:34:43.905955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:27:06.946 [2024-07-15 10:34:43.905974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
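Mixed in with the host-side errors, the trace above shows tgt_init restarting the target after the previous one (pid 3085913) was killed: nvmfappstart launches build/bin/nvmf_tgt with shared-memory instance 0, every tracepoint group enabled (-e 0xFFFF) and core mask 0xE (cores 1-3, matching the three reactors that start below), then waitforlisten blocks until the app's JSON-RPC socket answers. Stripped of the bookkeeping the real helpers in nvmf/common.sh and autotest_common.sh do, that start-and-wait step is roughly the sketch below; rpc_get_methods is used purely as a readiness probe and the relative paths assume the SPDK repository root.

# Start a fresh NVMe-oF target: instance 0, all tracepoint groups, cores 1-3 (mask 0xE).
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!

# Block until the app is up and its JSON-RPC server answers on the default socket.
echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
until ./scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done
echo "nvmf_tgt (pid $nvmfpid) is ready for RPCs"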
00:27:06.946 [2024-07-15 10:34:43.906191] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:06.946 [2024-07-15 10:34:43.906200] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:06.946 [2024-07-15 10:34:43.906208] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:27:06.946 [2024-07-15 10:34:43.907260] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:06.946 [2024-07-15 10:34:43.909723] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:06.946 [2024-07-15 10:34:43.921052] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:06.946 [2024-07-15 10:34:43.923986] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:06.946 [2024-07-15 10:34:43.924421] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:06.946 [2024-07-15 10:34:43.924443] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:06.946 [2024-07-15 10:34:43.924449] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:27:06.946 [2024-07-15 10:34:43.977525] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:06.946 [2024-07-15 10:34:43.977558] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:06.946 [2024-07-15 10:34:43.977564] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:06.946 [2024-07-15 10:34:43.977568] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:06.946 [2024-07-15 10:34:43.977572] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
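Because the target came up with -e 0xFFFF, all tracepoint groups are active, and the app_setup_trace notices above spell out how to collect the trace: snapshot it live with spdk_trace, or copy /dev/shm/nvmf_trace.0 for offline analysis. A minimal sketch of both options follows; the binary path and the -f option for decoding a saved file are assumed from standard SPDK usage rather than shown in this log.

# Live snapshot of the running target's tracepoints: shm name "nvmf", instance id 0,
# exactly the invocation the notice above suggests.
./build/bin/spdk_trace -s nvmf -i 0 > nvmf_trace.txt

# Or keep the shared-memory trace file and decode it after the run; -f for reading
# a saved file is an assumption based on the usual spdk_trace options.
cp /dev/shm/nvmf_trace.0 /tmp/
./build/bin/spdk_trace -f /tmp/nvmf_trace.0 > nvmf_trace_offline.txt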
00:27:06.946 [2024-07-15 10:34:43.977683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:06.946 [2024-07-15 10:34:43.977831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:06.946 [2024-07-15 10:34:43.977832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:07.518 10:34:44 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:07.518 10:34:44 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:27:07.518 10:34:44 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:07.518 10:34:44 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:07.518 10:34:44 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:07.518 10:34:44 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:07.518 10:34:44 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:27:07.518 10:34:44 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.518 10:34:44 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:07.779 [2024-07-15 10:34:44.731706] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x18e4920/0x18e8e10) succeed. 00:27:07.779 [2024-07-15 10:34:44.744610] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x18e5ec0/0x192a4a0) succeed. 00:27:07.779 10:34:44 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.779 10:34:44 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:07.779 10:34:44 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.779 10:34:44 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:07.779 Malloc0 00:27:07.779 10:34:44 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.779 10:34:44 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:07.779 10:34:44 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.779 10:34:44 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:07.779 10:34:44 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.779 10:34:44 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:07.779 10:34:44 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.779 10:34:44 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:07.779 10:34:44 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.779 10:34:44 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:27:07.779 10:34:44 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.779 10:34:44 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:07.779 [2024-07-15 10:34:44.880114] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:07.779 10:34:44 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
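The rpc_cmd calls traced here are the entire target-side setup for the bdevperf run: create the RDMA transport, create a 64 MB malloc bdev, expose it as a namespace of nqn.2016-06.io.spdk:cnode1, and add an RDMA listener on 192.168.100.8:4420 (the two create_ib_device notices show the transport binding both mlx5 ports). Replayed by hand against the target's RPC socket, the same sequence looks roughly like the sketch below; rpc_cmd in the trace is assumed to forward to scripts/rpc.py, while the commands and flags themselves are copied from the trace.

rpc=./scripts/rpc.py   # talks to /var/tmp/spdk.sock by default, same socket the target opened above

# RDMA transport with 1024 shared receive buffers and an 8192-byte IO unit size.
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

# 64 MB malloc bdev with 512-byte blocks as backing storage.
$rpc bdev_malloc_create 64 512 -b Malloc0

# Subsystem cnode1: allow any host (-a), set a serial number, attach Malloc0 as a
# namespace, and listen on the RDMA address/port used throughout this run.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420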
00:27:07.779 10:34:44 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3086538 [2024-07-15 10:34:44.928876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:27:07.779 [2024-07-15 10:34:44.928897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:07.779 [2024-07-15 10:34:44.929115] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:07.779 [2024-07-15 10:34:44.929124] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:07.779 [2024-07-15 10:34:44.929132] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:27:07.779 [2024-07-15 10:34:44.931209] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:07.779 [2024-07-15 10:34:44.932644] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:07.779 [2024-07-15 10:34:44.945005] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:08.043 [2024-07-15 10:34:45.000548] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:27:16.187
00:27:16.187                                                                              Latency(us)
00:27:16.187 Device Information   : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:27:16.187 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:27:16.187 Verification LBA range: start 0x0 length 0x4000
00:27:16.188 Nvme1n1              :      15.00   12135.91      47.41    7916.23       0.00    6358.27     344.75 1041585.49
00:27:16.188 ===================================================================================================================
00:27:16.188 Total                :              12135.91      47.41    7916.23       0.00    6358.27     344.75 1041585.49
00:27:16.188 10:34:53 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:27:16.188 10:34:53 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:16.188 10:34:53 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.188 10:34:53 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:16.188 10:34:53 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.188 10:34:53 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:27:16.188 10:34:53 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:27:16.188 10:34:53 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:16.188 10:34:53 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:27:16.188 10:34:53 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:27:16.188 10:34:53 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:27:16.188 10:34:53 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:27:16.188 10:34:53 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:16.188 10:34:53 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:27:16.453 rmmod nvme_rdma 00:27:16.453 rmmod nvme_fabrics 00:27:16.453 10:34:53 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:16.453 10:34:53 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:27:16.453 10:34:53 nvmf_rdma.nvmf_bdevperf
-- nvmf/common.sh@125 -- # return 0 00:27:16.453 10:34:53 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 3087618 ']' 00:27:16.453 10:34:53 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 3087618 00:27:16.453 10:34:53 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 3087618 ']' 00:27:16.453 10:34:53 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 3087618 00:27:16.453 10:34:53 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:27:16.453 10:34:53 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:16.453 10:34:53 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3087618 00:27:16.453 10:34:53 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:16.453 10:34:53 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:16.453 10:34:53 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3087618' 00:27:16.453 killing process with pid 3087618 00:27:16.453 10:34:53 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 3087618 00:27:16.453 10:34:53 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 3087618 00:27:16.715 10:34:53 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:16.715 10:34:53 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:27:16.715 00:27:16.715 real 0m26.654s 00:27:16.715 user 1m4.486s 00:27:16.715 sys 0m6.914s 00:27:16.715 10:34:53 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:16.715 10:34:53 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:16.715 ************************************ 00:27:16.715 END TEST nvmf_bdevperf 00:27:16.715 ************************************ 00:27:16.715 10:34:53 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:27:16.715 10:34:53 nvmf_rdma -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:27:16.715 10:34:53 nvmf_rdma -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:16.715 10:34:53 nvmf_rdma -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:16.715 10:34:53 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:27:16.715 ************************************ 00:27:16.715 START TEST nvmf_target_disconnect 00:27:16.715 ************************************ 00:27:16.715 10:34:53 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:27:16.715 * Looking for test storage... 
00:27:16.715 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:27:16.715 10:34:53 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:27:16.715 10:34:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:27:16.715 10:34:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:16.715 10:34:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:16.715 10:34:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:16.715 10:34:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:16.715 10:34:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:16.715 10:34:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:16.715 10:34:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:16.715 10:34:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:16.715 10:34:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:16.715 10:34:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:16.715 10:34:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:16.715 10:34:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:16.715 10:34:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:16.715 10:34:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:16.715 10:34:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:16.715 10:34:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:16.715 10:34:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:16.715 10:34:53 nvmf_rdma.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:16.715 10:34:53 nvmf_rdma.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:16.715 10:34:53 nvmf_rdma.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:16.715 10:34:53 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.715 10:34:53 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.715 10:34:53 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.715 10:34:53 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:27:16.716 10:34:53 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.716 10:34:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:27:16.716 10:34:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:16.716 10:34:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:16.716 10:34:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:16.716 10:34:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:16.716 10:34:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:16.716 10:34:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:16.716 10:34:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:16.716 10:34:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:16.716 10:34:53 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:27:16.716 10:34:53 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:27:16.716 10:34:53 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:27:16.716 10:34:53 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:27:16.716 10:34:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:27:16.716 10:34:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:16.716 10:34:53 nvmf_rdma.nvmf_target_disconnect -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:27:16.716 10:34:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:16.716 10:34:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:16.716 10:34:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:16.716 10:34:53 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:16.716 10:34:53 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:16.716 10:34:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:16.716 10:34:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:16.716 10:34:53 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:27:16.716 10:34:53 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:24.858 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:24.858 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:27:24.858 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:24.858 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:24.858 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:24.858 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:24.858 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:24.858 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:27:24.858 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:24.858 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:27:24.858 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:27:24.858 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:27:24.858 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:27:24.858 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:27:24.858 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:27:24.858 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:24.858 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:24.858 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:24.858 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:24.858 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:24.858 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:24.858 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:24.858 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:24.858 10:35:01 nvmf_rdma.nvmf_target_disconnect -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:24.858 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:24.858 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:24.858 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:24.858 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:27:24.858 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:27:24.858 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:27:24.858 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:27:24.858 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:27:24.858 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:24.858 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:24.858 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:27:24.858 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:27:24.858 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:27:24.858 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:27:24.858 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:24.858 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:24.858 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:27:24.858 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:27:24.858 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:24.858 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:27:24.858 Found 0000:98:00.1 (0x15b3 - 0x1015) 00:27:24.858 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:27:24.858 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:27:24.858 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:24.858 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:24.858 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:27:24.858 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:27:24.858 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:24.858 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:27:24.858 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:24.858 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:24.858 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:27:24.858 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:27:24.858 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:24.858 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:27:24.858 Found net devices under 0000:98:00.0: mlx_0_0 00:27:24.858 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:24.858 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:27:24.859 Found net devices under 0000:98:00.1: mlx_0_1 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@420 -- # rdma_device_init 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@58 -- # uname 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@62 -- # modprobe ib_cm 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@63 -- # modprobe ib_core 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@64 -- # modprobe ib_umad 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@66 -- # modprobe iw_cm 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@502 -- # allocate_nic_ips 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@73 -- # get_rdma_if_list 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:27:24.859 10:35:01 
nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:27:24.859 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:24.859 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:27:24.859 altname enp152s0f0np0 00:27:24.859 altname ens817f0np0 00:27:24.859 inet 192.168.100.8/24 scope global mlx_0_0 00:27:24.859 valid_lft forever preferred_lft forever 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:27:24.859 10:35:01 
nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:27:24.859 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:24.859 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:27:24.859 altname enp152s0f1np1 00:27:24.859 altname ens817f1np1 00:27:24.859 inet 192.168.100.9/24 scope global mlx_0_1 00:27:24.859 valid_lft forever preferred_lft forever 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@86 -- # get_rdma_if_list 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:24.859 10:35:01 
nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:27:24.859 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:27:24.860 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:27:24.860 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:24.860 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:24.860 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:27:24.860 192.168.100.9' 00:27:24.860 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:27:24.860 192.168.100.9' 00:27:24.860 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@457 -- # head -n 1 00:27:24.860 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:27:24.860 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:27:24.860 192.168.100.9' 00:27:24.860 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@458 -- # tail -n +2 00:27:24.860 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@458 -- # head -n 1 00:27:24.860 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:27:24.860 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:27:24.860 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:27:24.860 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:27:24.860 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:27:24.860 10:35:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:27:24.860 10:35:01 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:27:24.860 10:35:01 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:24.860 10:35:01 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:24.860 10:35:01 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:24.860 ************************************ 00:27:24.860 START TEST nvmf_target_disconnect_tc1 00:27:24.860 ************************************ 00:27:24.860 10:35:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:27:24.860 10:35:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:27:24.860 10:35:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:27:24.860 10:35:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:27:24.860 10:35:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:27:24.860 10:35:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:24.860 10:35:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:27:24.860 10:35:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:24.860 10:35:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:27:24.860 10:35:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:24.860 10:35:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:27:24.860 10:35:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect ]] 00:27:24.860 10:35:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:27:25.122 EAL: No free 2048 kB hugepages reported on node 1 00:27:25.122 [2024-07-15 10:35:02.137687] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:25.122 [2024-07-15 10:35:02.137731] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:25.122 [2024-07-15 10:35:02.137741] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d7040 00:27:26.088 [2024-07-15 10:35:03.142124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:27:26.088 [2024-07-15 10:35:03.142175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:27:26.088 [2024-07-15 10:35:03.142200] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr is in error state 00:27:26.088 [2024-07-15 10:35:03.142261] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:26.088 [2024-07-15 10:35:03.142284] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:27:26.088 spdk_nvme_probe() failed for transport address '192.168.100.8' 00:27:26.088 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:27:26.088 Initializing NVMe Controllers 00:27:26.088 10:35:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:27:26.088 10:35:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:26.088 10:35:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:26.088 10:35:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:26.088 00:27:26.088 real 0m1.140s 00:27:26.088 user 0m0.965s 00:27:26.088 sys 0m0.154s 00:27:26.088 10:35:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:26.088 10:35:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:26.088 ************************************ 00:27:26.088 END TEST nvmf_target_disconnect_tc1 00:27:26.088 ************************************ 00:27:26.088 10:35:03 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:27:26.088 10:35:03 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:27:26.088 10:35:03 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:26.088 10:35:03 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:26.088 10:35:03 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:26.088 ************************************ 00:27:26.088 START TEST nvmf_target_disconnect_tc2 00:27:26.088 ************************************ 00:27:26.088 10:35:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:27:26.088 10:35:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 192.168.100.8 00:27:26.088 10:35:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:26.088 10:35:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:26.088 10:35:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:26.088 10:35:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:26.088 10:35:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3094000 00:27:26.088 10:35:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3094000 00:27:26.088 10:35:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@829 -- # '[' -z 3094000 ']' 00:27:26.089 10:35:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:26.089 10:35:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:26.089 10:35:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:26.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:26.089 10:35:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:26.089 10:35:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:26.089 10:35:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:26.350 [2024-07-15 10:35:03.284800] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:27:26.350 [2024-07-15 10:35:03.284852] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:26.350 EAL: No free 2048 kB hugepages reported on node 1 00:27:26.350 [2024-07-15 10:35:03.372305] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:26.350 [2024-07-15 10:35:03.465695] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:26.350 [2024-07-15 10:35:03.465760] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:26.350 [2024-07-15 10:35:03.465768] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:26.350 [2024-07-15 10:35:03.465776] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:26.350 [2024-07-15 10:35:03.465782] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:26.350 [2024-07-15 10:35:03.465947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:27:26.350 [2024-07-15 10:35:03.466102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:27:26.350 [2024-07-15 10:35:03.466244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:27:26.350 [2024-07-15 10:35:03.466263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:27:26.921 10:35:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:26.921 10:35:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:27:26.921 10:35:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:26.921 10:35:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:26.921 10:35:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:26.921 10:35:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:26.921 10:35:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:26.921 10:35:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.921 10:35:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:27.182 Malloc0 00:27:27.182 10:35:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.182 10:35:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:27:27.182 10:35:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.182 10:35:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:27.182 [2024-07-15 10:35:04.168049] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x69b550/0x6a70b0) succeed. 00:27:27.182 [2024-07-15 10:35:04.183863] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x69cb90/0x6e8740) succeed. 
00:27:27.182 10:35:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.182 10:35:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:27.182 10:35:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.182 10:35:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:27.182 10:35:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.182 10:35:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:27.182 10:35:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.182 10:35:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:27.182 10:35:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.182 10:35:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:27:27.182 10:35:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.182 10:35:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:27.182 [2024-07-15 10:35:04.370081] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:27.182 10:35:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.182 10:35:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:27:27.182 10:35:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:27.182 10:35:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:27.443 10:35:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:27.443 10:35:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3094348 00:27:27.443 10:35:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:27:27.443 10:35:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:27:27.443 EAL: No free 2048 kB hugepages reported on node 1 00:27:29.358 10:35:06 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3094000 00:27:29.358 10:35:06 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:27:30.744 
Write completed with error (sct=0, sc=8) 00:27:30.744 starting I/O failed 00:27:30.744 Read completed with error (sct=0, sc=8) 00:27:30.744 starting I/O failed 00:27:30.744 Read completed with error (sct=0, sc=8) 00:27:30.744 starting I/O failed 00:27:30.744 Read completed with error (sct=0, sc=8) 00:27:30.744 starting I/O failed 00:27:30.744 Write completed with error (sct=0, sc=8) 00:27:30.744 starting I/O failed 00:27:30.744 Write completed with error (sct=0, sc=8) 00:27:30.744 starting I/O failed 00:27:30.744 Read completed with error (sct=0, sc=8) 00:27:30.744 starting I/O failed 00:27:30.744 Write completed with error (sct=0, sc=8) 00:27:30.744 starting I/O failed 00:27:30.744 Write completed with error (sct=0, sc=8) 00:27:30.744 starting I/O failed 00:27:30.744 Read completed with error (sct=0, sc=8) 00:27:30.744 starting I/O failed 00:27:30.744 Write completed with error (sct=0, sc=8) 00:27:30.744 starting I/O failed 00:27:30.744 Write completed with error (sct=0, sc=8) 00:27:30.744 starting I/O failed 00:27:30.744 Read completed with error (sct=0, sc=8) 00:27:30.744 starting I/O failed 00:27:30.744 Read completed with error (sct=0, sc=8) 00:27:30.744 starting I/O failed 00:27:30.744 Read completed with error (sct=0, sc=8) 00:27:30.744 starting I/O failed 00:27:30.744 Read completed with error (sct=0, sc=8) 00:27:30.744 starting I/O failed 00:27:30.744 Write completed with error (sct=0, sc=8) 00:27:30.744 starting I/O failed 00:27:30.744 Read completed with error (sct=0, sc=8) 00:27:30.744 starting I/O failed 00:27:30.744 Read completed with error (sct=0, sc=8) 00:27:30.744 starting I/O failed 00:27:30.744 Write completed with error (sct=0, sc=8) 00:27:30.744 starting I/O failed 00:27:30.744 Read completed with error (sct=0, sc=8) 00:27:30.744 starting I/O failed 00:27:30.744 Write completed with error (sct=0, sc=8) 00:27:30.744 starting I/O failed 00:27:30.744 Write completed with error (sct=0, sc=8) 00:27:30.744 starting I/O failed 00:27:30.744 Write completed with error (sct=0, sc=8) 00:27:30.744 starting I/O failed 00:27:30.744 Read completed with error (sct=0, sc=8) 00:27:30.744 starting I/O failed 00:27:30.744 Write completed with error (sct=0, sc=8) 00:27:30.744 starting I/O failed 00:27:30.744 Read completed with error (sct=0, sc=8) 00:27:30.744 starting I/O failed 00:27:30.744 Write completed with error (sct=0, sc=8) 00:27:30.744 starting I/O failed 00:27:30.744 Write completed with error (sct=0, sc=8) 00:27:30.744 starting I/O failed 00:27:30.744 Write completed with error (sct=0, sc=8) 00:27:30.744 starting I/O failed 00:27:30.744 Write completed with error (sct=0, sc=8) 00:27:30.744 starting I/O failed 00:27:30.744 Read completed with error (sct=0, sc=8) 00:27:30.744 starting I/O failed 00:27:30.744 [2024-07-15 10:35:07.585986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:31.317 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3094000 Killed "${NVMF_APP[@]}" "$@" 00:27:31.317 10:35:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 192.168.100.8 00:27:31.317 10:35:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:31.317 10:35:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:31.317 10:35:08 
nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:31.317 10:35:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:31.317 10:35:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3095030 00:27:31.317 10:35:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3095030 00:27:31.317 10:35:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:31.317 10:35:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3095030 ']' 00:27:31.317 10:35:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:31.317 10:35:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:31.317 10:35:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:31.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:31.317 10:35:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:31.317 10:35:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:31.317 [2024-07-15 10:35:08.456565] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:27:31.317 [2024-07-15 10:35:08.456639] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:31.317 EAL: No free 2048 kB hugepages reported on node 1 00:27:31.578 [2024-07-15 10:35:08.548256] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:31.578 Write completed with error (sct=0, sc=8) 00:27:31.578 starting I/O failed 00:27:31.578 Write completed with error (sct=0, sc=8) 00:27:31.578 starting I/O failed 00:27:31.578 Read completed with error (sct=0, sc=8) 00:27:31.578 starting I/O failed 00:27:31.578 Read completed with error (sct=0, sc=8) 00:27:31.578 starting I/O failed 00:27:31.578 Write completed with error (sct=0, sc=8) 00:27:31.578 starting I/O failed 00:27:31.578 Write completed with error (sct=0, sc=8) 00:27:31.578 starting I/O failed 00:27:31.578 Read completed with error (sct=0, sc=8) 00:27:31.578 starting I/O failed 00:27:31.578 Read completed with error (sct=0, sc=8) 00:27:31.578 starting I/O failed 00:27:31.578 Write completed with error (sct=0, sc=8) 00:27:31.578 starting I/O failed 00:27:31.578 Write completed with error (sct=0, sc=8) 00:27:31.578 starting I/O failed 00:27:31.578 Write completed with error (sct=0, sc=8) 00:27:31.578 starting I/O failed 00:27:31.578 Write completed with error (sct=0, sc=8) 00:27:31.578 starting I/O failed 00:27:31.578 Read completed with error (sct=0, sc=8) 00:27:31.578 starting I/O failed 00:27:31.578 Read completed with error (sct=0, sc=8) 00:27:31.578 starting I/O failed 00:27:31.578 Write completed with error (sct=0, sc=8) 00:27:31.578 starting I/O failed 00:27:31.578 Write completed with error (sct=0, sc=8) 00:27:31.578 starting I/O failed 00:27:31.578 Read completed with error (sct=0, sc=8) 00:27:31.578 starting I/O failed 00:27:31.578 Read completed with error (sct=0, sc=8) 00:27:31.578 starting I/O failed 00:27:31.578 Write completed with error (sct=0, sc=8) 00:27:31.578 starting I/O failed 00:27:31.578 Read completed with error (sct=0, sc=8) 00:27:31.578 starting I/O failed 00:27:31.578 Write completed with error (sct=0, sc=8) 00:27:31.578 starting I/O failed 00:27:31.578 Write completed with error (sct=0, sc=8) 00:27:31.578 starting I/O failed 00:27:31.578 Read completed with error (sct=0, sc=8) 00:27:31.578 starting I/O failed 00:27:31.578 Read completed with error (sct=0, sc=8) 00:27:31.578 starting I/O failed 00:27:31.578 Read completed with error (sct=0, sc=8) 00:27:31.578 starting I/O failed 00:27:31.578 Write completed with error (sct=0, sc=8) 00:27:31.578 starting I/O failed 00:27:31.578 Write completed with error (sct=0, sc=8) 00:27:31.578 starting I/O failed 00:27:31.578 Write completed with error (sct=0, sc=8) 00:27:31.578 starting I/O failed 00:27:31.578 Read completed with error (sct=0, sc=8) 00:27:31.578 starting I/O failed 00:27:31.578 Read completed with error (sct=0, sc=8) 00:27:31.578 starting I/O failed 00:27:31.578 Write completed with error (sct=0, sc=8) 00:27:31.578 starting I/O failed 00:27:31.578 Read completed with error (sct=0, sc=8) 00:27:31.578 starting I/O failed 00:27:31.578 [2024-07-15 10:35:08.591748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:31.578 [2024-07-15 10:35:08.594678] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) 
from CM event channel (status = 8) 00:27:31.578 [2024-07-15 10:35:08.594700] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:31.578 [2024-07-15 10:35:08.594708] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:31.578 [2024-07-15 10:35:08.643652] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:31.578 [2024-07-15 10:35:08.643708] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:31.578 [2024-07-15 10:35:08.643716] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:31.578 [2024-07-15 10:35:08.643723] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:31.578 [2024-07-15 10:35:08.643729] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:31.578 [2024-07-15 10:35:08.643895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:27:31.578 [2024-07-15 10:35:08.644040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:27:31.578 [2024-07-15 10:35:08.644201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:27:31.578 [2024-07-15 10:35:08.644202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:27:32.149 10:35:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:32.149 10:35:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:27:32.150 10:35:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:32.150 10:35:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:32.150 10:35:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:32.150 10:35:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:32.150 10:35:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:32.150 10:35:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.150 10:35:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:32.150 Malloc0 00:27:32.150 10:35:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.150 10:35:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:27:32.150 10:35:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.150 10:35:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:32.150 [2024-07-15 10:35:09.343401] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1cae550/0x1cba0b0) succeed. 00:27:32.411 [2024-07-15 10:35:09.358972] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1cafb90/0x1cfb740) succeed. 
00:27:32.411 10:35:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.411 10:35:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:32.411 10:35:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.411 10:35:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:32.411 10:35:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.411 10:35:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:32.411 10:35:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.411 10:35:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:32.411 10:35:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.411 10:35:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:27:32.411 10:35:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.411 10:35:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:32.411 [2024-07-15 10:35:09.505271] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:32.411 10:35:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.411 10:35:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:27:32.411 10:35:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.411 10:35:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:32.411 10:35:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.411 10:35:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3094348 00:27:32.411 [2024-07-15 10:35:09.599163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.411 qpair failed and we were unable to recover it. 
00:27:32.411 [2024-07-15 10:35:09.605531] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.411 [2024-07-15 10:35:09.605588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.411 [2024-07-15 10:35:09.605605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.411 [2024-07-15 10:35:09.605613] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.411 [2024-07-15 10:35:09.605620] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:32.672 [2024-07-15 10:35:09.614598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.672 qpair failed and we were unable to recover it. 00:27:32.672 [2024-07-15 10:35:09.625379] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.672 [2024-07-15 10:35:09.625421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.672 [2024-07-15 10:35:09.625435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.672 [2024-07-15 10:35:09.625442] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.672 [2024-07-15 10:35:09.625449] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:32.672 [2024-07-15 10:35:09.634826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.672 qpair failed and we were unable to recover it. 00:27:32.672 [2024-07-15 10:35:09.645435] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.672 [2024-07-15 10:35:09.645471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.672 [2024-07-15 10:35:09.645485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.672 [2024-07-15 10:35:09.645492] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.672 [2024-07-15 10:35:09.645498] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:32.672 [2024-07-15 10:35:09.654670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.672 qpair failed and we were unable to recover it. 
00:27:32.672 [2024-07-15 10:35:09.665096] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.672 [2024-07-15 10:35:09.665138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.672 [2024-07-15 10:35:09.665152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.672 [2024-07-15 10:35:09.665159] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.672 [2024-07-15 10:35:09.665166] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:32.672 [2024-07-15 10:35:09.674730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.672 qpair failed and we were unable to recover it. 00:27:32.672 [2024-07-15 10:35:09.685057] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.672 [2024-07-15 10:35:09.685103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.672 [2024-07-15 10:35:09.685116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.672 [2024-07-15 10:35:09.685127] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.672 [2024-07-15 10:35:09.685133] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:32.672 [2024-07-15 10:35:09.694792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.672 qpair failed and we were unable to recover it. 00:27:32.672 [2024-07-15 10:35:09.704826] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.672 [2024-07-15 10:35:09.704865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.672 [2024-07-15 10:35:09.704879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.672 [2024-07-15 10:35:09.704887] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.672 [2024-07-15 10:35:09.704894] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:32.672 [2024-07-15 10:35:09.714577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.672 qpair failed and we were unable to recover it. 
00:27:32.672 [2024-07-15 10:35:09.725599] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.673 [2024-07-15 10:35:09.725631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.673 [2024-07-15 10:35:09.725644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.673 [2024-07-15 10:35:09.725651] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.673 [2024-07-15 10:35:09.725658] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:32.673 [2024-07-15 10:35:09.734902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.673 qpair failed and we were unable to recover it. 00:27:32.673 [2024-07-15 10:35:09.745401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.673 [2024-07-15 10:35:09.745440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.673 [2024-07-15 10:35:09.745454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.673 [2024-07-15 10:35:09.745461] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.673 [2024-07-15 10:35:09.745467] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:32.673 [2024-07-15 10:35:09.755072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.673 qpair failed and we were unable to recover it. 00:27:32.673 [2024-07-15 10:35:09.765627] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.673 [2024-07-15 10:35:09.765674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.673 [2024-07-15 10:35:09.765699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.673 [2024-07-15 10:35:09.765708] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.673 [2024-07-15 10:35:09.765715] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:32.673 [2024-07-15 10:35:09.775155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.673 qpair failed and we were unable to recover it. 
00:27:32.673 [2024-07-15 10:35:09.785689] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.673 [2024-07-15 10:35:09.785722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.673 [2024-07-15 10:35:09.785736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.673 [2024-07-15 10:35:09.785744] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.673 [2024-07-15 10:35:09.785750] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:32.673 [2024-07-15 10:35:09.795152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.673 qpair failed and we were unable to recover it. 00:27:32.673 [2024-07-15 10:35:09.805684] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.673 [2024-07-15 10:35:09.805717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.673 [2024-07-15 10:35:09.805731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.673 [2024-07-15 10:35:09.805738] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.673 [2024-07-15 10:35:09.805744] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:32.673 [2024-07-15 10:35:09.815091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.673 qpair failed and we were unable to recover it. 00:27:32.673 [2024-07-15 10:35:09.825724] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.673 [2024-07-15 10:35:09.825761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.673 [2024-07-15 10:35:09.825775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.673 [2024-07-15 10:35:09.825782] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.673 [2024-07-15 10:35:09.825788] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:32.673 [2024-07-15 10:35:09.835083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.673 qpair failed and we were unable to recover it. 
00:27:32.673 [2024-07-15 10:35:09.845993] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.673 [2024-07-15 10:35:09.846037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.673 [2024-07-15 10:35:09.846064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.673 [2024-07-15 10:35:09.846072] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.673 [2024-07-15 10:35:09.846079] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:32.673 [2024-07-15 10:35:09.855427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.673 qpair failed and we were unable to recover it. 00:27:32.673 [2024-07-15 10:35:09.865989] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.673 [2024-07-15 10:35:09.866022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.673 [2024-07-15 10:35:09.866040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.673 [2024-07-15 10:35:09.866047] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.673 [2024-07-15 10:35:09.866053] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:32.935 [2024-07-15 10:35:09.875448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.935 qpair failed and we were unable to recover it. 00:27:32.935 [2024-07-15 10:35:09.886054] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.935 [2024-07-15 10:35:09.886087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.935 [2024-07-15 10:35:09.886100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.936 [2024-07-15 10:35:09.886107] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.936 [2024-07-15 10:35:09.886113] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:32.936 [2024-07-15 10:35:09.895178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.936 qpair failed and we were unable to recover it. 
00:27:32.936 [2024-07-15 10:35:09.905730] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.936 [2024-07-15 10:35:09.905768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.936 [2024-07-15 10:35:09.905781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.936 [2024-07-15 10:35:09.905788] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.936 [2024-07-15 10:35:09.905794] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:32.936 [2024-07-15 10:35:09.915312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.936 qpair failed and we were unable to recover it. 00:27:32.936 [2024-07-15 10:35:09.926200] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.936 [2024-07-15 10:35:09.926243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.936 [2024-07-15 10:35:09.926257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.936 [2024-07-15 10:35:09.926264] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.936 [2024-07-15 10:35:09.926270] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:32.936 [2024-07-15 10:35:09.935617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.936 qpair failed and we were unable to recover it. 00:27:32.936 [2024-07-15 10:35:09.946056] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.936 [2024-07-15 10:35:09.946090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.936 [2024-07-15 10:35:09.946103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.936 [2024-07-15 10:35:09.946110] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.936 [2024-07-15 10:35:09.946120] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:32.936 [2024-07-15 10:35:09.955393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.936 qpair failed and we were unable to recover it. 
00:27:32.936 [2024-07-15 10:35:09.966088] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.936 [2024-07-15 10:35:09.966126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.936 [2024-07-15 10:35:09.966139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.936 [2024-07-15 10:35:09.966146] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.936 [2024-07-15 10:35:09.966152] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:32.936 [2024-07-15 10:35:09.975422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.936 qpair failed and we were unable to recover it. 00:27:32.936 [2024-07-15 10:35:09.985912] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.936 [2024-07-15 10:35:09.985949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.936 [2024-07-15 10:35:09.985961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.936 [2024-07-15 10:35:09.985968] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.936 [2024-07-15 10:35:09.985975] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:32.936 [2024-07-15 10:35:09.995639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.936 qpair failed and we were unable to recover it. 00:27:32.936 [2024-07-15 10:35:10.006712] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.936 [2024-07-15 10:35:10.006755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.936 [2024-07-15 10:35:10.006769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.936 [2024-07-15 10:35:10.006777] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.936 [2024-07-15 10:35:10.006784] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:32.936 [2024-07-15 10:35:10.015662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.936 qpair failed and we were unable to recover it. 
00:27:32.936 [2024-07-15 10:35:10.025549] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.936 [2024-07-15 10:35:10.025594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.936 [2024-07-15 10:35:10.025607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.936 [2024-07-15 10:35:10.025614] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.936 [2024-07-15 10:35:10.025620] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:32.936 [2024-07-15 10:35:10.035622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.936 qpair failed and we were unable to recover it. 00:27:32.936 [2024-07-15 10:35:10.046508] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.936 [2024-07-15 10:35:10.046549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.936 [2024-07-15 10:35:10.046563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.936 [2024-07-15 10:35:10.046570] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.936 [2024-07-15 10:35:10.046576] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:32.936 [2024-07-15 10:35:10.055908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.936 qpair failed and we were unable to recover it. 00:27:32.936 [2024-07-15 10:35:10.066097] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.936 [2024-07-15 10:35:10.066136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.936 [2024-07-15 10:35:10.066149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.936 [2024-07-15 10:35:10.066156] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.936 [2024-07-15 10:35:10.066162] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:32.936 [2024-07-15 10:35:10.075792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.936 qpair failed and we were unable to recover it. 
00:27:32.936 [2024-07-15 10:35:10.086661] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.936 [2024-07-15 10:35:10.086699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.936 [2024-07-15 10:35:10.086712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.936 [2024-07-15 10:35:10.086719] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.936 [2024-07-15 10:35:10.086725] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:32.936 [2024-07-15 10:35:10.096125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.936 qpair failed and we were unable to recover it. 00:27:32.936 [2024-07-15 10:35:10.106637] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.936 [2024-07-15 10:35:10.106671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.936 [2024-07-15 10:35:10.106684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.936 [2024-07-15 10:35:10.106691] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.936 [2024-07-15 10:35:10.106697] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:32.936 [2024-07-15 10:35:10.115914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.936 qpair failed and we were unable to recover it. 00:27:32.936 [2024-07-15 10:35:10.127058] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.936 [2024-07-15 10:35:10.127095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.936 [2024-07-15 10:35:10.127108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.936 [2024-07-15 10:35:10.127118] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.936 [2024-07-15 10:35:10.127125] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:33.198 [2024-07-15 10:35:10.136144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.198 qpair failed and we were unable to recover it. 
00:27:33.198 [2024-07-15 10:35:10.146537] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.198 [2024-07-15 10:35:10.146573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.198 [2024-07-15 10:35:10.146585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.198 [2024-07-15 10:35:10.146592] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.198 [2024-07-15 10:35:10.146598] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:33.198 [2024-07-15 10:35:10.156196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.198 qpair failed and we were unable to recover it. 00:27:33.198 [2024-07-15 10:35:10.166837] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.198 [2024-07-15 10:35:10.166874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.198 [2024-07-15 10:35:10.166887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.198 [2024-07-15 10:35:10.166893] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.198 [2024-07-15 10:35:10.166900] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:33.198 [2024-07-15 10:35:10.176197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.199 qpair failed and we were unable to recover it. 00:27:33.199 [2024-07-15 10:35:10.186854] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.199 [2024-07-15 10:35:10.186889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.199 [2024-07-15 10:35:10.186902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.199 [2024-07-15 10:35:10.186909] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.199 [2024-07-15 10:35:10.186915] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:33.199 [2024-07-15 10:35:10.196474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.199 qpair failed and we were unable to recover it. 
00:27:33.199 [2024-07-15 10:35:10.207061] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.199 [2024-07-15 10:35:10.207092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.199 [2024-07-15 10:35:10.207105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.199 [2024-07-15 10:35:10.207112] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.199 [2024-07-15 10:35:10.207119] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:33.199 [2024-07-15 10:35:10.216368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.199 qpair failed and we were unable to recover it. 00:27:33.199 [2024-07-15 10:35:10.226816] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.199 [2024-07-15 10:35:10.226855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.199 [2024-07-15 10:35:10.226868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.199 [2024-07-15 10:35:10.226875] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.199 [2024-07-15 10:35:10.226881] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:33.199 [2024-07-15 10:35:10.236502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.199 qpair failed and we were unable to recover it. 00:27:33.199 [2024-07-15 10:35:10.247146] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.199 [2024-07-15 10:35:10.247183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.199 [2024-07-15 10:35:10.247193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.199 [2024-07-15 10:35:10.247198] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.199 [2024-07-15 10:35:10.247202] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:33.199 [2024-07-15 10:35:10.256468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.199 qpair failed and we were unable to recover it. 
00:27:33.199 [2024-07-15 10:35:10.267186] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.199 [2024-07-15 10:35:10.267218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.199 [2024-07-15 10:35:10.267247] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.199 [2024-07-15 10:35:10.267253] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.199 [2024-07-15 10:35:10.267258] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:33.199 [2024-07-15 10:35:10.276443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.199 qpair failed and we were unable to recover it. 00:27:33.199 [2024-07-15 10:35:10.287021] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.199 [2024-07-15 10:35:10.287051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.199 [2024-07-15 10:35:10.287062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.199 [2024-07-15 10:35:10.287067] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.199 [2024-07-15 10:35:10.287072] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:33.199 [2024-07-15 10:35:10.296533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.199 qpair failed and we were unable to recover it. 00:27:33.199 [2024-07-15 10:35:10.307058] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.199 [2024-07-15 10:35:10.307090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.199 [2024-07-15 10:35:10.307114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.199 [2024-07-15 10:35:10.307120] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.199 [2024-07-15 10:35:10.307124] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:33.199 [2024-07-15 10:35:10.316518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.199 qpair failed and we were unable to recover it. 
00:27:33.199 [2024-07-15 10:35:10.327354] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.199 [2024-07-15 10:35:10.327386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.199 [2024-07-15 10:35:10.327397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.199 [2024-07-15 10:35:10.327402] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.199 [2024-07-15 10:35:10.327407] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:33.199 [2024-07-15 10:35:10.336595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.199 qpair failed and we were unable to recover it. 00:27:33.199 [2024-07-15 10:35:10.347071] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.199 [2024-07-15 10:35:10.347098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.199 [2024-07-15 10:35:10.347108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.199 [2024-07-15 10:35:10.347113] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.199 [2024-07-15 10:35:10.347117] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:33.199 [2024-07-15 10:35:10.356456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.199 qpair failed and we were unable to recover it. 00:27:33.199 [2024-07-15 10:35:10.367381] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.199 [2024-07-15 10:35:10.367410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.199 [2024-07-15 10:35:10.367430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.199 [2024-07-15 10:35:10.367436] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.199 [2024-07-15 10:35:10.367441] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:33.199 [2024-07-15 10:35:10.376758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.199 qpair failed and we were unable to recover it. 
00:27:33.199 [2024-07-15 10:35:10.387064] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.199 [2024-07-15 10:35:10.387092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.199 [2024-07-15 10:35:10.387102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.199 [2024-07-15 10:35:10.387107] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.199 [2024-07-15 10:35:10.387115] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:33.461 [2024-07-15 10:35:10.396939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.462 qpair failed and we were unable to recover it. 00:27:33.462 [2024-07-15 10:35:10.407615] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.462 [2024-07-15 10:35:10.407651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.462 [2024-07-15 10:35:10.407672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.462 [2024-07-15 10:35:10.407677] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.462 [2024-07-15 10:35:10.407682] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:33.462 [2024-07-15 10:35:10.416909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.462 qpair failed and we were unable to recover it. 00:27:33.462 [2024-07-15 10:35:10.427611] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.462 [2024-07-15 10:35:10.427647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.462 [2024-07-15 10:35:10.427658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.462 [2024-07-15 10:35:10.427663] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.462 [2024-07-15 10:35:10.427667] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:33.462 [2024-07-15 10:35:10.437153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.462 qpair failed and we were unable to recover it. 
00:27:33.462 [2024-07-15 10:35:10.447649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.462 [2024-07-15 10:35:10.447678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.462 [2024-07-15 10:35:10.447698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.462 [2024-07-15 10:35:10.447704] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.462 [2024-07-15 10:35:10.447709] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:33.462 [2024-07-15 10:35:10.456942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.462 qpair failed and we were unable to recover it. 00:27:33.462 [2024-07-15 10:35:10.467249] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.462 [2024-07-15 10:35:10.467275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.462 [2024-07-15 10:35:10.467285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.462 [2024-07-15 10:35:10.467290] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.462 [2024-07-15 10:35:10.467294] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:33.462 [2024-07-15 10:35:10.477114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.462 qpair failed and we were unable to recover it. 00:27:33.462 [2024-07-15 10:35:10.487677] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.462 [2024-07-15 10:35:10.487712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.462 [2024-07-15 10:35:10.487722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.462 [2024-07-15 10:35:10.487726] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.462 [2024-07-15 10:35:10.487731] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:33.462 [2024-07-15 10:35:10.497022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.462 qpair failed and we were unable to recover it. 
00:27:33.462 [2024-07-15 10:35:10.507850] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.462 [2024-07-15 10:35:10.507878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.462 [2024-07-15 10:35:10.507898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.462 [2024-07-15 10:35:10.507904] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.462 [2024-07-15 10:35:10.507909] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:33.462 [2024-07-15 10:35:10.517297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.462 qpair failed and we were unable to recover it. 00:27:33.462 [2024-07-15 10:35:10.527814] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.462 [2024-07-15 10:35:10.527839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.462 [2024-07-15 10:35:10.527850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.462 [2024-07-15 10:35:10.527854] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.462 [2024-07-15 10:35:10.527859] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:33.462 [2024-07-15 10:35:10.537013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.462 qpair failed and we were unable to recover it. 00:27:33.462 [2024-07-15 10:35:10.547577] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.462 [2024-07-15 10:35:10.547606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.462 [2024-07-15 10:35:10.547616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.462 [2024-07-15 10:35:10.547621] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.462 [2024-07-15 10:35:10.547625] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:33.462 [2024-07-15 10:35:10.557341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.462 qpair failed and we were unable to recover it. 
00:27:33.462 [2024-07-15 10:35:10.567919] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.462 [2024-07-15 10:35:10.567957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.462 [2024-07-15 10:35:10.567967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.462 [2024-07-15 10:35:10.567974] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.462 [2024-07-15 10:35:10.567978] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:33.462 [2024-07-15 10:35:10.577250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.462 qpair failed and we were unable to recover it. 00:27:33.462 [2024-07-15 10:35:10.587943] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.462 [2024-07-15 10:35:10.587969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.462 [2024-07-15 10:35:10.587978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.462 [2024-07-15 10:35:10.587983] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.462 [2024-07-15 10:35:10.587987] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:33.462 [2024-07-15 10:35:10.597346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.462 qpair failed and we were unable to recover it. 00:27:33.462 [2024-07-15 10:35:10.607881] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.462 [2024-07-15 10:35:10.607913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.462 [2024-07-15 10:35:10.607933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.462 [2024-07-15 10:35:10.607939] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.462 [2024-07-15 10:35:10.607943] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:33.462 [2024-07-15 10:35:10.617452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.462 qpair failed and we were unable to recover it. 
00:27:33.462 [2024-07-15 10:35:10.627758] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.462 [2024-07-15 10:35:10.627786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.462 [2024-07-15 10:35:10.627797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.462 [2024-07-15 10:35:10.627802] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.462 [2024-07-15 10:35:10.627806] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:33.462 [2024-07-15 10:35:10.637404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.462 qpair failed and we were unable to recover it. 00:27:33.462 [2024-07-15 10:35:10.648264] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.462 [2024-07-15 10:35:10.648307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.462 [2024-07-15 10:35:10.648328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.462 [2024-07-15 10:35:10.648333] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.462 [2024-07-15 10:35:10.648338] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:33.725 [2024-07-15 10:35:10.657719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.725 qpair failed and we were unable to recover it. 00:27:33.725 [2024-07-15 10:35:10.668147] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.725 [2024-07-15 10:35:10.668176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.725 [2024-07-15 10:35:10.668187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.725 [2024-07-15 10:35:10.668192] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.725 [2024-07-15 10:35:10.668196] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:33.725 [2024-07-15 10:35:10.677819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.725 qpair failed and we were unable to recover it. 
00:27:33.725 [2024-07-15 10:35:10.688414] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.725 [2024-07-15 10:35:10.688444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.725 [2024-07-15 10:35:10.688454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.725 [2024-07-15 10:35:10.688459] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.725 [2024-07-15 10:35:10.688463] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:33.725 [2024-07-15 10:35:10.697722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.725 qpair failed and we were unable to recover it. 00:27:33.725 [2024-07-15 10:35:10.708000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.725 [2024-07-15 10:35:10.708026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.725 [2024-07-15 10:35:10.708037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.725 [2024-07-15 10:35:10.708041] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.725 [2024-07-15 10:35:10.708046] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:33.725 [2024-07-15 10:35:10.717647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.725 qpair failed and we were unable to recover it. 00:27:33.725 [2024-07-15 10:35:10.728247] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.725 [2024-07-15 10:35:10.728277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.725 [2024-07-15 10:35:10.728287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.725 [2024-07-15 10:35:10.728292] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.725 [2024-07-15 10:35:10.728296] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:33.725 [2024-07-15 10:35:10.737816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:33.725 qpair failed and we were unable to recover it. 
00:27:33.725 [2024-07-15 10:35:10.748586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:33.725 [2024-07-15 10:35:10.748616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:33.725 [2024-07-15 10:35:10.748628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:33.725 [2024-07-15 10:35:10.748632] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:33.725 [2024-07-15 10:35:10.748636] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:33.725 [2024-07-15 10:35:10.757835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:33.725 qpair failed and we were unable to recover it.
[... the same CONNECT failure sequence (ctrlr.c: 761 "Unknown controller ID 0x1", nvme_fabric.c "Connect command failed, rc -5" / "Connect command completed with error: sct 1, sc 130", nvme_rdma.c "Failed to connect rqpair=0x2000003d4c40", nvme_qpair.c "CQ transport error -6 (No such device or address) on qpair id 1", "qpair failed and we were unable to recover it.") repeats for every connect attempt from 10:35:10.768 through 10:35:12.101; only the timestamps differ ...]
00:27:35.041 [2024-07-15 10:35:12.112172] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:27:35.041 [2024-07-15 10:35:12.112203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:27:35.041 [2024-07-15 10:35:12.112212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:27:35.041 [2024-07-15 10:35:12.112217] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:27:35.041 [2024-07-15 10:35:12.112221] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:27:35.041 [2024-07-15 10:35:12.121971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:35.041 qpair failed and we were unable to recover it.
00:27:35.041 [2024-07-15 10:35:12.132464] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.041 [2024-07-15 10:35:12.132498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.041 [2024-07-15 10:35:12.132508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.041 [2024-07-15 10:35:12.132512] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.041 [2024-07-15 10:35:12.132517] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:35.041 [2024-07-15 10:35:12.141731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:35.041 qpair failed and we were unable to recover it. 00:27:35.041 [2024-07-15 10:35:12.151992] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.041 [2024-07-15 10:35:12.152018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.041 [2024-07-15 10:35:12.152027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.041 [2024-07-15 10:35:12.152032] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.041 [2024-07-15 10:35:12.152040] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:35.041 [2024-07-15 10:35:12.161705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:35.041 qpair failed and we were unable to recover it. 00:27:35.041 [2024-07-15 10:35:12.172437] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.041 [2024-07-15 10:35:12.172469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.041 [2024-07-15 10:35:12.172479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.041 [2024-07-15 10:35:12.172484] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.041 [2024-07-15 10:35:12.172488] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:35.041 [2024-07-15 10:35:12.181702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:35.041 qpair failed and we were unable to recover it. 
00:27:35.041 [2024-07-15 10:35:12.192369] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.041 [2024-07-15 10:35:12.192400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.041 [2024-07-15 10:35:12.192409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.041 [2024-07-15 10:35:12.192414] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.041 [2024-07-15 10:35:12.192418] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:35.041 [2024-07-15 10:35:12.201814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:35.041 qpair failed and we were unable to recover it. 00:27:35.041 [2024-07-15 10:35:12.212503] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.041 [2024-07-15 10:35:12.212531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.041 [2024-07-15 10:35:12.212541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.041 [2024-07-15 10:35:12.212545] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.041 [2024-07-15 10:35:12.212550] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:35.041 [2024-07-15 10:35:12.221986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:35.041 qpair failed and we were unable to recover it. 00:27:35.041 [2024-07-15 10:35:12.232000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.041 [2024-07-15 10:35:12.232031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.041 [2024-07-15 10:35:12.232040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.041 [2024-07-15 10:35:12.232045] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.041 [2024-07-15 10:35:12.232049] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:35.304 [2024-07-15 10:35:12.242030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:35.304 qpair failed and we were unable to recover it. 
00:27:35.304 [2024-07-15 10:35:12.252609] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.304 [2024-07-15 10:35:12.252642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.304 [2024-07-15 10:35:12.252651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.304 [2024-07-15 10:35:12.252656] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.304 [2024-07-15 10:35:12.252660] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:35.304 [2024-07-15 10:35:12.262081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:35.304 qpair failed and we were unable to recover it. 00:27:35.304 [2024-07-15 10:35:12.272463] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.304 [2024-07-15 10:35:12.272492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.304 [2024-07-15 10:35:12.272501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.304 [2024-07-15 10:35:12.272506] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.304 [2024-07-15 10:35:12.272510] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:35.304 [2024-07-15 10:35:12.282131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:35.304 qpair failed and we were unable to recover it. 00:27:35.304 [2024-07-15 10:35:12.292745] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.304 [2024-07-15 10:35:12.292772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.304 [2024-07-15 10:35:12.292781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.304 [2024-07-15 10:35:12.292786] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.305 [2024-07-15 10:35:12.292790] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:35.305 [2024-07-15 10:35:12.302117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:35.305 qpair failed and we were unable to recover it. 
00:27:35.305 [2024-07-15 10:35:12.312254] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.305 [2024-07-15 10:35:12.312284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.305 [2024-07-15 10:35:12.312293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.305 [2024-07-15 10:35:12.312298] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.305 [2024-07-15 10:35:12.312302] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:35.305 [2024-07-15 10:35:12.322109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:35.305 qpair failed and we were unable to recover it. 00:27:35.305 [2024-07-15 10:35:12.332698] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.305 [2024-07-15 10:35:12.332730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.305 [2024-07-15 10:35:12.332740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.305 [2024-07-15 10:35:12.332751] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.305 [2024-07-15 10:35:12.332755] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:35.305 [2024-07-15 10:35:12.342169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:35.305 qpair failed and we were unable to recover it. 00:27:35.305 [2024-07-15 10:35:12.352582] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.305 [2024-07-15 10:35:12.352611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.305 [2024-07-15 10:35:12.352620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.305 [2024-07-15 10:35:12.352625] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.305 [2024-07-15 10:35:12.352629] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:35.305 [2024-07-15 10:35:12.362345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:35.305 qpair failed and we were unable to recover it. 
00:27:35.305 [2024-07-15 10:35:12.372889] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.305 [2024-07-15 10:35:12.372914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.305 [2024-07-15 10:35:12.372923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.305 [2024-07-15 10:35:12.372928] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.305 [2024-07-15 10:35:12.372932] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:35.305 [2024-07-15 10:35:12.382597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:35.305 qpair failed and we were unable to recover it. 00:27:35.305 [2024-07-15 10:35:12.392566] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.305 [2024-07-15 10:35:12.392592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.305 [2024-07-15 10:35:12.392600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.305 [2024-07-15 10:35:12.392605] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.305 [2024-07-15 10:35:12.392609] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:35.305 [2024-07-15 10:35:12.402182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:35.305 qpair failed and we were unable to recover it. 00:27:35.305 [2024-07-15 10:35:12.413007] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.305 [2024-07-15 10:35:12.413042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.305 [2024-07-15 10:35:12.413052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.305 [2024-07-15 10:35:12.413057] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.305 [2024-07-15 10:35:12.413061] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:35.305 [2024-07-15 10:35:12.422408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:35.305 qpair failed and we were unable to recover it. 
00:27:35.305 [2024-07-15 10:35:12.432883] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.305 [2024-07-15 10:35:12.432909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.305 [2024-07-15 10:35:12.432919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.305 [2024-07-15 10:35:12.432924] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.305 [2024-07-15 10:35:12.432929] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:35.305 [2024-07-15 10:35:12.442556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:35.305 qpair failed and we were unable to recover it. 00:27:35.305 [2024-07-15 10:35:12.452941] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.305 [2024-07-15 10:35:12.452968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.305 [2024-07-15 10:35:12.452977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.305 [2024-07-15 10:35:12.452982] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.305 [2024-07-15 10:35:12.452986] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:35.305 [2024-07-15 10:35:12.462482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:35.305 qpair failed and we were unable to recover it. 00:27:35.305 [2024-07-15 10:35:12.473021] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.305 [2024-07-15 10:35:12.473046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.305 [2024-07-15 10:35:12.473056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.305 [2024-07-15 10:35:12.473061] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.305 [2024-07-15 10:35:12.473065] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:35.305 [2024-07-15 10:35:12.482721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:35.305 qpair failed and we were unable to recover it. 
00:27:35.305 [2024-07-15 10:35:12.493077] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.305 [2024-07-15 10:35:12.493113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.305 [2024-07-15 10:35:12.493123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.305 [2024-07-15 10:35:12.493128] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.305 [2024-07-15 10:35:12.493132] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:35.568 [2024-07-15 10:35:12.502965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:35.568 qpair failed and we were unable to recover it. 00:27:35.568 [2024-07-15 10:35:12.512965] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.568 [2024-07-15 10:35:12.512996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.568 [2024-07-15 10:35:12.513008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.568 [2024-07-15 10:35:12.513013] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.568 [2024-07-15 10:35:12.513017] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:35.568 [2024-07-15 10:35:12.522549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:35.568 qpair failed and we were unable to recover it. 00:27:35.568 [2024-07-15 10:35:12.532756] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.568 [2024-07-15 10:35:12.532789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.568 [2024-07-15 10:35:12.532799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.568 [2024-07-15 10:35:12.532803] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.568 [2024-07-15 10:35:12.532808] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:35.568 [2024-07-15 10:35:12.542717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:35.568 qpair failed and we were unable to recover it. 
00:27:35.568 [2024-07-15 10:35:12.553155] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.568 [2024-07-15 10:35:12.553181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.568 [2024-07-15 10:35:12.553191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.568 [2024-07-15 10:35:12.553196] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.568 [2024-07-15 10:35:12.553200] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:35.568 [2024-07-15 10:35:12.562555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:35.568 qpair failed and we were unable to recover it. 00:27:35.568 [2024-07-15 10:35:12.573632] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.568 [2024-07-15 10:35:12.573670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.568 [2024-07-15 10:35:12.573679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.568 [2024-07-15 10:35:12.573683] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.568 [2024-07-15 10:35:12.573688] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:35.568 [2024-07-15 10:35:12.582891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:35.568 qpair failed and we were unable to recover it. 00:27:35.568 [2024-07-15 10:35:12.593549] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.568 [2024-07-15 10:35:12.593580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.568 [2024-07-15 10:35:12.593590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.568 [2024-07-15 10:35:12.593595] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.568 [2024-07-15 10:35:12.593601] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:35.568 [2024-07-15 10:35:12.603095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:35.568 qpair failed and we were unable to recover it. 
00:27:35.568 [2024-07-15 10:35:12.613639] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.568 [2024-07-15 10:35:12.613669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.568 [2024-07-15 10:35:12.613679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.568 [2024-07-15 10:35:12.613684] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.568 [2024-07-15 10:35:12.613688] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:35.568 [2024-07-15 10:35:12.622844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:35.568 qpair failed and we were unable to recover it. 00:27:35.568 [2024-07-15 10:35:12.633261] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.568 [2024-07-15 10:35:12.633288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.568 [2024-07-15 10:35:12.633297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.568 [2024-07-15 10:35:12.633302] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.568 [2024-07-15 10:35:12.633306] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:35.568 [2024-07-15 10:35:12.643268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:35.568 qpair failed and we were unable to recover it. 00:27:35.568 [2024-07-15 10:35:12.653815] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.568 [2024-07-15 10:35:12.653842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.568 [2024-07-15 10:35:12.653851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.568 [2024-07-15 10:35:12.653857] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.568 [2024-07-15 10:35:12.653861] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:35.568 [2024-07-15 10:35:12.663076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:35.568 qpair failed and we were unable to recover it. 
00:27:35.568 [2024-07-15 10:35:12.673900] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.568 [2024-07-15 10:35:12.673927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.568 [2024-07-15 10:35:12.673937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.568 [2024-07-15 10:35:12.673942] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.568 [2024-07-15 10:35:12.673947] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:35.568 [2024-07-15 10:35:12.683087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:35.568 qpair failed and we were unable to recover it. 00:27:35.568 [2024-07-15 10:35:12.694069] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.568 [2024-07-15 10:35:12.694103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.568 [2024-07-15 10:35:12.694113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.568 [2024-07-15 10:35:12.694117] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.568 [2024-07-15 10:35:12.694122] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:35.568 [2024-07-15 10:35:12.703011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:35.568 qpair failed and we were unable to recover it. 00:27:35.568 [2024-07-15 10:35:12.712931] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.568 [2024-07-15 10:35:12.712958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.568 [2024-07-15 10:35:12.712968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.568 [2024-07-15 10:35:12.712972] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.568 [2024-07-15 10:35:12.712977] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:35.568 [2024-07-15 10:35:12.723077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:35.568 qpair failed and we were unable to recover it. 
00:27:35.568 [2024-07-15 10:35:12.733283] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.568 [2024-07-15 10:35:12.733316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.568 [2024-07-15 10:35:12.733326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.568 [2024-07-15 10:35:12.733330] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.568 [2024-07-15 10:35:12.733334] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:35.569 [2024-07-15 10:35:12.743217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:35.569 qpair failed and we were unable to recover it. 00:27:35.569 [2024-07-15 10:35:12.753855] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.569 [2024-07-15 10:35:12.753882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.569 [2024-07-15 10:35:12.753892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.569 [2024-07-15 10:35:12.753897] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.569 [2024-07-15 10:35:12.753901] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:35.831 [2024-07-15 10:35:12.763744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:35.831 qpair failed and we were unable to recover it. 00:27:35.831 [2024-07-15 10:35:12.773942] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.831 [2024-07-15 10:35:12.773972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.831 [2024-07-15 10:35:12.773982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.831 [2024-07-15 10:35:12.773989] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.831 [2024-07-15 10:35:12.773994] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:35.831 [2024-07-15 10:35:12.783319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:35.831 qpair failed and we were unable to recover it. 
00:27:35.831 [2024-07-15 10:35:12.793904] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.831 [2024-07-15 10:35:12.793932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.831 [2024-07-15 10:35:12.793941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.831 [2024-07-15 10:35:12.793946] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.831 [2024-07-15 10:35:12.793950] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:35.831 [2024-07-15 10:35:12.803506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:35.831 qpair failed and we were unable to recover it. 00:27:35.831 [2024-07-15 10:35:12.813221] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.831 [2024-07-15 10:35:12.813254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.831 [2024-07-15 10:35:12.813263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.831 [2024-07-15 10:35:12.813268] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.831 [2024-07-15 10:35:12.813272] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:35.831 [2024-07-15 10:35:12.823001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:35.831 qpair failed and we were unable to recover it. 00:27:35.831 [2024-07-15 10:35:12.833639] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.831 [2024-07-15 10:35:12.833669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.831 [2024-07-15 10:35:12.833679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.831 [2024-07-15 10:35:12.833684] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.831 [2024-07-15 10:35:12.833688] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:35.831 [2024-07-15 10:35:12.843596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:35.831 qpair failed and we were unable to recover it. 
00:27:35.831 [2024-07-15 10:35:12.853628] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.831 [2024-07-15 10:35:12.853665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.831 [2024-07-15 10:35:12.853674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.831 [2024-07-15 10:35:12.853679] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.831 [2024-07-15 10:35:12.853683] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:35.831 [2024-07-15 10:35:12.863490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:35.831 qpair failed and we were unable to recover it. 00:27:35.831 [2024-07-15 10:35:12.873702] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.831 [2024-07-15 10:35:12.873730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.831 [2024-07-15 10:35:12.873739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.831 [2024-07-15 10:35:12.873744] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.831 [2024-07-15 10:35:12.873748] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:35.831 [2024-07-15 10:35:12.883698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:35.831 qpair failed and we were unable to recover it. 00:27:35.831 [2024-07-15 10:35:12.893742] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.831 [2024-07-15 10:35:12.893774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.831 [2024-07-15 10:35:12.893782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.831 [2024-07-15 10:35:12.893787] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.831 [2024-07-15 10:35:12.893792] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:35.831 [2024-07-15 10:35:12.903613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:35.831 qpair failed and we were unable to recover it. 
00:27:35.831 [2024-07-15 10:35:12.913932] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.831 [2024-07-15 10:35:12.913966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.831 [2024-07-15 10:35:12.913975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.831 [2024-07-15 10:35:12.913980] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.831 [2024-07-15 10:35:12.913984] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:35.831 [2024-07-15 10:35:12.923746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:35.831 qpair failed and we were unable to recover it. 00:27:35.831 [2024-07-15 10:35:12.934160] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.832 [2024-07-15 10:35:12.934191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.832 [2024-07-15 10:35:12.934201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.832 [2024-07-15 10:35:12.934206] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.832 [2024-07-15 10:35:12.934210] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:35.832 [2024-07-15 10:35:12.943607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:35.832 qpair failed and we were unable to recover it. 00:27:35.832 [2024-07-15 10:35:12.953684] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.832 [2024-07-15 10:35:12.953711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.832 [2024-07-15 10:35:12.953722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.832 [2024-07-15 10:35:12.953727] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.832 [2024-07-15 10:35:12.953732] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:35.832 [2024-07-15 10:35:12.963916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:35.832 qpair failed and we were unable to recover it. 
00:27:35.832 [2024-07-15 10:35:12.974307] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.832 [2024-07-15 10:35:12.974334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.832 [2024-07-15 10:35:12.974343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.832 [2024-07-15 10:35:12.974348] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.832 [2024-07-15 10:35:12.974352] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:35.832 [2024-07-15 10:35:12.983891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:35.832 qpair failed and we were unable to recover it. 00:27:35.832 [2024-07-15 10:35:12.994245] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.832 [2024-07-15 10:35:12.994273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.832 [2024-07-15 10:35:12.994283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.832 [2024-07-15 10:35:12.994289] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.832 [2024-07-15 10:35:12.994294] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:35.832 [2024-07-15 10:35:13.003855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:35.832 qpair failed and we were unable to recover it. 00:27:35.832 [2024-07-15 10:35:13.014335] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:35.832 [2024-07-15 10:35:13.014366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:35.832 [2024-07-15 10:35:13.014376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:35.832 [2024-07-15 10:35:13.014380] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:35.832 [2024-07-15 10:35:13.014385] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:35.832 [2024-07-15 10:35:13.024288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:35.832 qpair failed and we were unable to recover it. 
00:27:36.094 [2024-07-15 10:35:13.033977] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.094 [2024-07-15 10:35:13.034005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.094 [2024-07-15 10:35:13.034015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.094 [2024-07-15 10:35:13.034019] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.094 [2024-07-15 10:35:13.034026] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:36.094 [2024-07-15 10:35:13.044339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:36.094 qpair failed and we were unable to recover it. 00:27:36.094 [2024-07-15 10:35:13.054669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.094 [2024-07-15 10:35:13.054703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.094 [2024-07-15 10:35:13.054712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.094 [2024-07-15 10:35:13.054717] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.094 [2024-07-15 10:35:13.054721] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:36.094 [2024-07-15 10:35:13.064288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:36.094 qpair failed and we were unable to recover it. 00:27:36.094 [2024-07-15 10:35:13.074649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.094 [2024-07-15 10:35:13.074674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.094 [2024-07-15 10:35:13.074683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.094 [2024-07-15 10:35:13.074688] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.094 [2024-07-15 10:35:13.074692] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:36.094 [2024-07-15 10:35:13.084211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:36.094 qpair failed and we were unable to recover it. 
00:27:36.094 [2024-07-15 10:35:13.094632] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.094 [2024-07-15 10:35:13.094669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.094 [2024-07-15 10:35:13.094678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.094 [2024-07-15 10:35:13.094683] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.094 [2024-07-15 10:35:13.094687] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:36.094 [2024-07-15 10:35:13.104285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:36.094 qpair failed and we were unable to recover it. 00:27:36.094 [2024-07-15 10:35:13.114273] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.094 [2024-07-15 10:35:13.114299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.094 [2024-07-15 10:35:13.114309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.094 [2024-07-15 10:35:13.114314] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.094 [2024-07-15 10:35:13.114318] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:36.094 [2024-07-15 10:35:13.123996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:36.094 qpair failed and we were unable to recover it. 00:27:36.094 [2024-07-15 10:35:13.134479] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.094 [2024-07-15 10:35:13.134513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.094 [2024-07-15 10:35:13.134523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.094 [2024-07-15 10:35:13.134528] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.094 [2024-07-15 10:35:13.134532] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:36.094 [2024-07-15 10:35:13.144449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:36.094 qpair failed and we were unable to recover it. 
00:27:36.094 [2024-07-15 10:35:13.155361] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.094 [2024-07-15 10:35:13.155389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.094 [2024-07-15 10:35:13.155399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.094 [2024-07-15 10:35:13.155403] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.094 [2024-07-15 10:35:13.155408] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:36.094 [2024-07-15 10:35:13.164650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:36.094 qpair failed and we were unable to recover it. 00:27:36.094 [2024-07-15 10:35:13.175163] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.094 [2024-07-15 10:35:13.175190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.094 [2024-07-15 10:35:13.175199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.094 [2024-07-15 10:35:13.175204] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.094 [2024-07-15 10:35:13.175208] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:36.094 [2024-07-15 10:35:13.184561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:36.094 qpair failed and we were unable to recover it. 00:27:36.094 [2024-07-15 10:35:13.194869] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.094 [2024-07-15 10:35:13.194897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.094 [2024-07-15 10:35:13.194906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.094 [2024-07-15 10:35:13.194911] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.094 [2024-07-15 10:35:13.194915] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:36.094 [2024-07-15 10:35:13.204453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:36.094 qpair failed and we were unable to recover it. 
00:27:36.094 [2024-07-15 10:35:13.215144] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.094 [2024-07-15 10:35:13.215175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.094 [2024-07-15 10:35:13.215185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.094 [2024-07-15 10:35:13.215192] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.094 [2024-07-15 10:35:13.215196] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:36.094 [2024-07-15 10:35:13.224609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:36.094 qpair failed and we were unable to recover it. 00:27:36.094 [2024-07-15 10:35:13.235250] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.094 [2024-07-15 10:35:13.235278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.094 [2024-07-15 10:35:13.235287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.094 [2024-07-15 10:35:13.235292] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.094 [2024-07-15 10:35:13.235296] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:36.095 [2024-07-15 10:35:13.244763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:36.095 qpair failed and we were unable to recover it. 00:27:36.095 [2024-07-15 10:35:13.255449] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.095 [2024-07-15 10:35:13.255473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.095 [2024-07-15 10:35:13.255482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.095 [2024-07-15 10:35:13.255487] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.095 [2024-07-15 10:35:13.255491] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:36.095 [2024-07-15 10:35:13.264728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:36.095 qpair failed and we were unable to recover it. 
00:27:36.095 [2024-07-15 10:35:13.275144] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.095 [2024-07-15 10:35:13.275171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.095 [2024-07-15 10:35:13.275180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.095 [2024-07-15 10:35:13.275185] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.095 [2024-07-15 10:35:13.275189] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:36.095 [2024-07-15 10:35:13.284887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:36.095 qpair failed and we were unable to recover it. 00:27:36.357 [2024-07-15 10:35:13.295215] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.357 [2024-07-15 10:35:13.295246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.357 [2024-07-15 10:35:13.295255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.357 [2024-07-15 10:35:13.295260] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.357 [2024-07-15 10:35:13.295264] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:36.357 [2024-07-15 10:35:13.304963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:36.357 qpair failed and we were unable to recover it. 00:27:36.357 [2024-07-15 10:35:13.315440] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.357 [2024-07-15 10:35:13.315469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.357 [2024-07-15 10:35:13.315479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.357 [2024-07-15 10:35:13.315484] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.357 [2024-07-15 10:35:13.315488] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:36.357 [2024-07-15 10:35:13.324852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:36.357 qpair failed and we were unable to recover it. 
00:27:36.357 [2024-07-15 10:35:13.335191] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.357 [2024-07-15 10:35:13.335220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.357 [2024-07-15 10:35:13.335232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.357 [2024-07-15 10:35:13.335237] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.357 [2024-07-15 10:35:13.335242] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:36.357 [2024-07-15 10:35:13.345007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:36.357 qpair failed and we were unable to recover it. 00:27:36.357 [2024-07-15 10:35:13.355145] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.357 [2024-07-15 10:35:13.355171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.357 [2024-07-15 10:35:13.355180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.357 [2024-07-15 10:35:13.355184] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.357 [2024-07-15 10:35:13.355189] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:36.357 [2024-07-15 10:35:13.364947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:36.357 qpair failed and we were unable to recover it. 00:27:36.357 [2024-07-15 10:35:13.375697] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.357 [2024-07-15 10:35:13.375726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.357 [2024-07-15 10:35:13.375735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.357 [2024-07-15 10:35:13.375740] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.357 [2024-07-15 10:35:13.375744] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:36.357 [2024-07-15 10:35:13.384927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:36.357 qpair failed and we were unable to recover it. 
00:27:36.357 [2024-07-15 10:35:13.395732] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.357 [2024-07-15 10:35:13.395761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.357 [2024-07-15 10:35:13.395773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.357 [2024-07-15 10:35:13.395777] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.357 [2024-07-15 10:35:13.395782] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:36.357 [2024-07-15 10:35:13.405529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:36.357 qpair failed and we were unable to recover it. 00:27:36.357 [2024-07-15 10:35:13.415686] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.357 [2024-07-15 10:35:13.415715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.357 [2024-07-15 10:35:13.415724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.357 [2024-07-15 10:35:13.415729] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.357 [2024-07-15 10:35:13.415733] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:36.357 [2024-07-15 10:35:13.425256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:36.357 qpair failed and we were unable to recover it. 00:27:36.357 [2024-07-15 10:35:13.435493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.357 [2024-07-15 10:35:13.435521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.357 [2024-07-15 10:35:13.435531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.357 [2024-07-15 10:35:13.435535] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.357 [2024-07-15 10:35:13.435540] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:36.357 [2024-07-15 10:35:13.445318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:36.357 qpair failed and we were unable to recover it. 
00:27:36.357 [2024-07-15 10:35:13.455953] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.357 [2024-07-15 10:35:13.455982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.357 [2024-07-15 10:35:13.455990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.357 [2024-07-15 10:35:13.455995] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.357 [2024-07-15 10:35:13.455999] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:36.357 [2024-07-15 10:35:13.465314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:36.357 qpair failed and we were unable to recover it. 00:27:36.357 [2024-07-15 10:35:13.476005] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.357 [2024-07-15 10:35:13.476034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.357 [2024-07-15 10:35:13.476043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.357 [2024-07-15 10:35:13.476048] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.357 [2024-07-15 10:35:13.476055] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:36.357 [2024-07-15 10:35:13.485422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:36.357 qpair failed and we were unable to recover it. 00:27:36.357 [2024-07-15 10:35:13.496151] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.357 [2024-07-15 10:35:13.496183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.357 [2024-07-15 10:35:13.496192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.357 [2024-07-15 10:35:13.496197] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.357 [2024-07-15 10:35:13.496201] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:36.357 [2024-07-15 10:35:13.505515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:36.357 qpair failed and we were unable to recover it. 
00:27:36.357 [2024-07-15 10:35:13.515747] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.357 [2024-07-15 10:35:13.515773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.357 [2024-07-15 10:35:13.515783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.357 [2024-07-15 10:35:13.515787] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.358 [2024-07-15 10:35:13.515791] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:36.358 [2024-07-15 10:35:13.525776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:36.358 qpair failed and we were unable to recover it. 00:27:36.358 [2024-07-15 10:35:13.536009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.358 [2024-07-15 10:35:13.536041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.358 [2024-07-15 10:35:13.536050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.358 [2024-07-15 10:35:13.536055] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.358 [2024-07-15 10:35:13.536059] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:36.358 [2024-07-15 10:35:13.545487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:36.358 qpair failed and we were unable to recover it. 00:27:36.620 [2024-07-15 10:35:13.556611] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.620 [2024-07-15 10:35:13.556639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.620 [2024-07-15 10:35:13.556659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.620 [2024-07-15 10:35:13.556665] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.620 [2024-07-15 10:35:13.556670] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:36.620 [2024-07-15 10:35:13.565578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:36.620 qpair failed and we were unable to recover it. 
00:27:36.620 [2024-07-15 10:35:13.576538] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.620 [2024-07-15 10:35:13.576573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.620 [2024-07-15 10:35:13.576593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.620 [2024-07-15 10:35:13.576599] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.620 [2024-07-15 10:35:13.576604] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:36.620 [2024-07-15 10:35:13.585771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:36.620 qpair failed and we were unable to recover it. 00:27:36.620 [2024-07-15 10:35:13.596002] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.620 [2024-07-15 10:35:13.596029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.620 [2024-07-15 10:35:13.596040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.620 [2024-07-15 10:35:13.596045] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.620 [2024-07-15 10:35:13.596049] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:36.620 [2024-07-15 10:35:13.605572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:36.620 qpair failed and we were unable to recover it. 00:27:36.620 [2024-07-15 10:35:13.616052] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.620 [2024-07-15 10:35:13.616082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.620 [2024-07-15 10:35:13.616092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.620 [2024-07-15 10:35:13.616097] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.620 [2024-07-15 10:35:13.616101] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:36.620 [2024-07-15 10:35:13.625931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:36.620 qpair failed and we were unable to recover it. 
00:27:36.620 [2024-07-15 10:35:13.636501] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.620 [2024-07-15 10:35:13.636526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.620 [2024-07-15 10:35:13.636536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.620 [2024-07-15 10:35:13.636541] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.620 [2024-07-15 10:35:13.636545] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:36.620 [2024-07-15 10:35:13.645970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:36.620 qpair failed and we were unable to recover it. 00:27:36.620 [2024-07-15 10:35:13.656515] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.620 [2024-07-15 10:35:13.656542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.620 [2024-07-15 10:35:13.656551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.620 [2024-07-15 10:35:13.656560] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.620 [2024-07-15 10:35:13.656564] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:36.620 [2024-07-15 10:35:13.665834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:36.620 qpair failed and we were unable to recover it. 00:27:36.620 [2024-07-15 10:35:13.676286] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.620 [2024-07-15 10:35:13.676315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.620 [2024-07-15 10:35:13.676324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.620 [2024-07-15 10:35:13.676329] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.620 [2024-07-15 10:35:13.676333] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:36.620 [2024-07-15 10:35:13.685954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:36.620 qpair failed and we were unable to recover it. 
00:27:36.620 [2024-07-15 10:35:13.696467] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.620 [2024-07-15 10:35:13.696496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.620 [2024-07-15 10:35:13.696506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.620 [2024-07-15 10:35:13.696510] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.620 [2024-07-15 10:35:13.696515] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:36.620 [2024-07-15 10:35:13.706077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:36.620 qpair failed and we were unable to recover it. 00:27:36.620 [2024-07-15 10:35:13.716872] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.620 [2024-07-15 10:35:13.716900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.620 [2024-07-15 10:35:13.716910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.620 [2024-07-15 10:35:13.716915] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.620 [2024-07-15 10:35:13.716919] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:36.620 [2024-07-15 10:35:13.726190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:36.620 qpair failed and we were unable to recover it. 00:27:36.620 [2024-07-15 10:35:13.736743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.620 [2024-07-15 10:35:13.736776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.620 [2024-07-15 10:35:13.736786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.620 [2024-07-15 10:35:13.736791] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.620 [2024-07-15 10:35:13.736795] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:36.620 [2024-07-15 10:35:13.746208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:36.620 qpair failed and we were unable to recover it. 
00:27:36.620 [2024-07-15 10:35:13.756449] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.620 [2024-07-15 10:35:13.756476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.620 [2024-07-15 10:35:13.756486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.620 [2024-07-15 10:35:13.756490] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.620 [2024-07-15 10:35:13.756495] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:36.620 [2024-07-15 10:35:13.766326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:36.620 qpair failed and we were unable to recover it. 00:27:36.620 [2024-07-15 10:35:13.776756] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.620 [2024-07-15 10:35:13.776787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.620 [2024-07-15 10:35:13.776796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.620 [2024-07-15 10:35:13.776801] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.620 [2024-07-15 10:35:13.776805] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:36.620 [2024-07-15 10:35:13.786177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:36.620 qpair failed and we were unable to recover it. 00:27:36.620 [2024-07-15 10:35:13.796995] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.620 [2024-07-15 10:35:13.797024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.620 [2024-07-15 10:35:13.797033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.620 [2024-07-15 10:35:13.797038] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.620 [2024-07-15 10:35:13.797042] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:36.620 [2024-07-15 10:35:13.806273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:36.620 qpair failed and we were unable to recover it. 
00:27:36.883 [2024-07-15 10:35:13.817213] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.883 [2024-07-15 10:35:13.817240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.883 [2024-07-15 10:35:13.817249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.883 [2024-07-15 10:35:13.817254] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.883 [2024-07-15 10:35:13.817258] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:36.883 [2024-07-15 10:35:13.826416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:36.883 qpair failed and we were unable to recover it. 00:27:36.883 [2024-07-15 10:35:13.836722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.883 [2024-07-15 10:35:13.836750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.883 [2024-07-15 10:35:13.836763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.883 [2024-07-15 10:35:13.836767] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.883 [2024-07-15 10:35:13.836772] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:36.883 [2024-07-15 10:35:13.846619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:36.883 qpair failed and we were unable to recover it. 00:27:36.883 [2024-07-15 10:35:13.857358] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.883 [2024-07-15 10:35:13.857384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.883 [2024-07-15 10:35:13.857393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.883 [2024-07-15 10:35:13.857398] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.883 [2024-07-15 10:35:13.857402] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:36.883 [2024-07-15 10:35:13.866456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:36.883 qpair failed and we were unable to recover it. 
00:27:36.883 [2024-07-15 10:35:13.877294] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.883 [2024-07-15 10:35:13.877323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.883 [2024-07-15 10:35:13.877332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.883 [2024-07-15 10:35:13.877337] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.883 [2024-07-15 10:35:13.877341] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:36.883 [2024-07-15 10:35:13.886582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:36.883 qpair failed and we were unable to recover it. 00:27:36.883 [2024-07-15 10:35:13.897174] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.883 [2024-07-15 10:35:13.897201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.883 [2024-07-15 10:35:13.897210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.883 [2024-07-15 10:35:13.897215] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.883 [2024-07-15 10:35:13.897219] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:36.883 [2024-07-15 10:35:13.906270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:36.883 qpair failed and we were unable to recover it. 00:27:36.883 [2024-07-15 10:35:13.916891] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.883 [2024-07-15 10:35:13.916918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.883 [2024-07-15 10:35:13.916928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.883 [2024-07-15 10:35:13.916933] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.883 [2024-07-15 10:35:13.916940] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:36.883 [2024-07-15 10:35:13.926538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:36.883 qpair failed and we were unable to recover it. 
00:27:36.883 [2024-07-15 10:35:13.937496] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.883 [2024-07-15 10:35:13.937524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.883 [2024-07-15 10:35:13.937533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.883 [2024-07-15 10:35:13.937538] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.883 [2024-07-15 10:35:13.937542] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:36.883 [2024-07-15 10:35:13.946873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:36.883 qpair failed and we were unable to recover it. 00:27:36.883 [2024-07-15 10:35:13.957385] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.883 [2024-07-15 10:35:13.957417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.883 [2024-07-15 10:35:13.957426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.883 [2024-07-15 10:35:13.957431] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.883 [2024-07-15 10:35:13.957435] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:36.883 [2024-07-15 10:35:13.966518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:36.883 qpair failed and we were unable to recover it. 00:27:36.883 [2024-07-15 10:35:13.977527] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.883 [2024-07-15 10:35:13.977552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.883 [2024-07-15 10:35:13.977561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.883 [2024-07-15 10:35:13.977566] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.883 [2024-07-15 10:35:13.977570] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:36.883 [2024-07-15 10:35:13.986662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:36.883 qpair failed and we were unable to recover it. 
00:27:36.883 [2024-07-15 10:35:13.997190] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.883 [2024-07-15 10:35:13.997216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.883 [2024-07-15 10:35:13.997225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.883 [2024-07-15 10:35:13.997238] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.883 [2024-07-15 10:35:13.997243] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:36.883 [2024-07-15 10:35:14.006844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:36.883 qpair failed and we were unable to recover it. 00:27:36.883 [2024-07-15 10:35:14.017660] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.883 [2024-07-15 10:35:14.017698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.883 [2024-07-15 10:35:14.017707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.883 [2024-07-15 10:35:14.017712] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.883 [2024-07-15 10:35:14.017716] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:36.883 [2024-07-15 10:35:14.026804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:36.883 qpair failed and we were unable to recover it. 00:27:36.883 [2024-07-15 10:35:14.037709] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.883 [2024-07-15 10:35:14.037740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.883 [2024-07-15 10:35:14.037751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.883 [2024-07-15 10:35:14.037755] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.883 [2024-07-15 10:35:14.037760] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:36.883 [2024-07-15 10:35:14.047320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:36.883 qpair failed and we were unable to recover it. 
00:27:36.883 [2024-07-15 10:35:14.057635] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.883 [2024-07-15 10:35:14.057665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.883 [2024-07-15 10:35:14.057674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.883 [2024-07-15 10:35:14.057679] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.883 [2024-07-15 10:35:14.057683] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:36.883 [2024-07-15 10:35:14.067288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:36.883 qpair failed and we were unable to recover it. 00:27:36.883 [2024-07-15 10:35:14.077422] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:36.884 [2024-07-15 10:35:14.077449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:36.884 [2024-07-15 10:35:14.077458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:36.884 [2024-07-15 10:35:14.077463] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:36.884 [2024-07-15 10:35:14.077467] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:37.145 [2024-07-15 10:35:14.087039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:37.145 qpair failed and we were unable to recover it. 00:27:37.145 [2024-07-15 10:35:14.097882] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.145 [2024-07-15 10:35:14.097912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.145 [2024-07-15 10:35:14.097921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.145 [2024-07-15 10:35:14.097929] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.145 [2024-07-15 10:35:14.097933] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:37.145 [2024-07-15 10:35:14.106904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:37.146 qpair failed and we were unable to recover it. 
00:27:37.146 [2024-07-15 10:35:14.117990] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.146 [2024-07-15 10:35:14.118019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.146 [2024-07-15 10:35:14.118028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.146 [2024-07-15 10:35:14.118032] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.146 [2024-07-15 10:35:14.118037] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:37.146 [2024-07-15 10:35:14.127280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:37.146 qpair failed and we were unable to recover it. 00:27:37.146 [2024-07-15 10:35:14.137955] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.146 [2024-07-15 10:35:14.137984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.146 [2024-07-15 10:35:14.137994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.146 [2024-07-15 10:35:14.137999] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.146 [2024-07-15 10:35:14.138003] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:37.146 [2024-07-15 10:35:14.147288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:37.146 qpair failed and we were unable to recover it. 00:27:37.146 [2024-07-15 10:35:14.157834] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.146 [2024-07-15 10:35:14.157860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.146 [2024-07-15 10:35:14.157869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.146 [2024-07-15 10:35:14.157874] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.146 [2024-07-15 10:35:14.157878] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:37.146 [2024-07-15 10:35:14.167501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:37.146 qpair failed and we were unable to recover it. 
00:27:37.146 [2024-07-15 10:35:14.177978] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.146 [2024-07-15 10:35:14.178009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.146 [2024-07-15 10:35:14.178018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.146 [2024-07-15 10:35:14.178023] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.146 [2024-07-15 10:35:14.178027] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:37.146 [2024-07-15 10:35:14.187446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:37.146 qpair failed and we were unable to recover it. 00:27:37.146 [2024-07-15 10:35:14.198061] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.146 [2024-07-15 10:35:14.198090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.146 [2024-07-15 10:35:14.198100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.146 [2024-07-15 10:35:14.198105] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.146 [2024-07-15 10:35:14.198109] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:37.146 [2024-07-15 10:35:14.207679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:37.146 qpair failed and we were unable to recover it. 00:27:37.146 [2024-07-15 10:35:14.218208] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.146 [2024-07-15 10:35:14.218242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.146 [2024-07-15 10:35:14.218252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.146 [2024-07-15 10:35:14.218256] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.146 [2024-07-15 10:35:14.218261] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:37.146 [2024-07-15 10:35:14.227446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:37.146 qpair failed and we were unable to recover it. 
00:27:37.146 [2024-07-15 10:35:14.237834] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.146 [2024-07-15 10:35:14.237865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.146 [2024-07-15 10:35:14.237875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.146 [2024-07-15 10:35:14.237880] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.146 [2024-07-15 10:35:14.237884] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:37.146 [2024-07-15 10:35:14.247505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:37.146 qpair failed and we were unable to recover it. 00:27:37.146 [2024-07-15 10:35:14.257318] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.146 [2024-07-15 10:35:14.257347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.146 [2024-07-15 10:35:14.257357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.146 [2024-07-15 10:35:14.257362] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.146 [2024-07-15 10:35:14.257366] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:37.146 [2024-07-15 10:35:14.267223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:37.146 qpair failed and we were unable to recover it. 00:27:37.146 [2024-07-15 10:35:14.278405] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.146 [2024-07-15 10:35:14.278437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.146 [2024-07-15 10:35:14.278449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.146 [2024-07-15 10:35:14.278454] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.146 [2024-07-15 10:35:14.278458] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:37.146 [2024-07-15 10:35:14.287723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:37.146 qpair failed and we were unable to recover it. 
00:27:37.146 [2024-07-15 10:35:14.298400] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.146 [2024-07-15 10:35:14.298432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.146 [2024-07-15 10:35:14.298442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.146 [2024-07-15 10:35:14.298447] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.146 [2024-07-15 10:35:14.298451] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:37.146 [2024-07-15 10:35:14.307685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:37.146 qpair failed and we were unable to recover it. 00:27:37.146 [2024-07-15 10:35:14.318013] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.146 [2024-07-15 10:35:14.318044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.146 [2024-07-15 10:35:14.318053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.146 [2024-07-15 10:35:14.318058] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.146 [2024-07-15 10:35:14.318063] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:37.146 [2024-07-15 10:35:14.327673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:37.146 qpair failed and we were unable to recover it. 00:27:37.146 [2024-07-15 10:35:14.337458] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.146 [2024-07-15 10:35:14.337487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.146 [2024-07-15 10:35:14.337497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.146 [2024-07-15 10:35:14.337502] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.146 [2024-07-15 10:35:14.337506] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:37.408 [2024-07-15 10:35:14.347605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:37.408 qpair failed and we were unable to recover it. 
00:27:37.408 [2024-07-15 10:35:14.358447] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.408 [2024-07-15 10:35:14.358471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.408 [2024-07-15 10:35:14.358481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.408 [2024-07-15 10:35:14.358486] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.408 [2024-07-15 10:35:14.358493] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:37.408 [2024-07-15 10:35:14.367897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:37.408 qpair failed and we were unable to recover it. 00:27:37.408 [2024-07-15 10:35:14.378646] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.408 [2024-07-15 10:35:14.378673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.408 [2024-07-15 10:35:14.378682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.408 [2024-07-15 10:35:14.378687] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.408 [2024-07-15 10:35:14.378691] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:37.408 [2024-07-15 10:35:14.387989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:37.408 qpair failed and we were unable to recover it. 00:27:37.409 [2024-07-15 10:35:14.398188] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.409 [2024-07-15 10:35:14.398216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.409 [2024-07-15 10:35:14.398226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.409 [2024-07-15 10:35:14.398235] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.409 [2024-07-15 10:35:14.398240] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:37.409 [2024-07-15 10:35:14.408286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:37.409 qpair failed and we were unable to recover it. 
00:27:37.409 [2024-07-15 10:35:14.418617] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.409 [2024-07-15 10:35:14.418649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.409 [2024-07-15 10:35:14.418659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.409 [2024-07-15 10:35:14.418663] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.409 [2024-07-15 10:35:14.418668] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:37.409 [2024-07-15 10:35:14.428120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:37.409 qpair failed and we were unable to recover it. 00:27:37.409 [2024-07-15 10:35:14.438005] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.409 [2024-07-15 10:35:14.438035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.409 [2024-07-15 10:35:14.438044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.409 [2024-07-15 10:35:14.438049] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.409 [2024-07-15 10:35:14.438053] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:37.409 [2024-07-15 10:35:14.448160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:37.409 qpair failed and we were unable to recover it. 00:27:37.409 [2024-07-15 10:35:14.458794] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.409 [2024-07-15 10:35:14.458825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.409 [2024-07-15 10:35:14.458835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.409 [2024-07-15 10:35:14.458840] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.409 [2024-07-15 10:35:14.458844] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:37.409 [2024-07-15 10:35:14.468211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:37.409 qpair failed and we were unable to recover it. 
00:27:37.409 [2024-07-15 10:35:14.478445] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.409 [2024-07-15 10:35:14.478471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.409 [2024-07-15 10:35:14.478480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.409 [2024-07-15 10:35:14.478485] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.409 [2024-07-15 10:35:14.478489] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:37.409 [2024-07-15 10:35:14.488449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:37.409 qpair failed and we were unable to recover it. 00:27:37.409 [2024-07-15 10:35:14.498269] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.409 [2024-07-15 10:35:14.498299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.409 [2024-07-15 10:35:14.498308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.409 [2024-07-15 10:35:14.498313] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.409 [2024-07-15 10:35:14.498317] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:37.409 [2024-07-15 10:35:14.508249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:37.409 qpair failed and we were unable to recover it. 00:27:37.409 [2024-07-15 10:35:14.518951] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.409 [2024-07-15 10:35:14.518981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.409 [2024-07-15 10:35:14.518991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.409 [2024-07-15 10:35:14.518995] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.409 [2024-07-15 10:35:14.518999] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:37.409 [2024-07-15 10:35:14.528181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:37.409 qpair failed and we were unable to recover it. 
00:27:37.409 [2024-07-15 10:35:14.539096] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.409 [2024-07-15 10:35:14.539121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.409 [2024-07-15 10:35:14.539131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.409 [2024-07-15 10:35:14.539138] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.409 [2024-07-15 10:35:14.539142] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:37.409 [2024-07-15 10:35:14.548453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:37.409 qpair failed and we were unable to recover it. 00:27:37.409 [2024-07-15 10:35:14.558093] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.409 [2024-07-15 10:35:14.558118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.409 [2024-07-15 10:35:14.558127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.409 [2024-07-15 10:35:14.558132] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.409 [2024-07-15 10:35:14.558136] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:37.409 [2024-07-15 10:35:14.568461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:37.409 qpair failed and we were unable to recover it. 00:27:37.409 [2024-07-15 10:35:14.579009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.409 [2024-07-15 10:35:14.579040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.409 [2024-07-15 10:35:14.579050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.409 [2024-07-15 10:35:14.579054] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.409 [2024-07-15 10:35:14.579059] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:37.409 [2024-07-15 10:35:14.588448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:37.409 qpair failed and we were unable to recover it. 
00:27:37.409 [2024-07-15 10:35:14.599218] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.409 [2024-07-15 10:35:14.599250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.409 [2024-07-15 10:35:14.599260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.410 [2024-07-15 10:35:14.599264] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.410 [2024-07-15 10:35:14.599269] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:37.671 [2024-07-15 10:35:14.608686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:37.671 qpair failed and we were unable to recover it. 00:27:37.671 [2024-07-15 10:35:14.618981] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.671 [2024-07-15 10:35:14.619005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.671 [2024-07-15 10:35:14.619015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.671 [2024-07-15 10:35:14.619019] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.671 [2024-07-15 10:35:14.619024] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:37.671 [2024-07-15 10:35:14.628412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:37.671 qpair failed and we were unable to recover it. 00:27:37.671 [2024-07-15 10:35:14.638655] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:37.671 [2024-07-15 10:35:14.638682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:37.671 [2024-07-15 10:35:14.638692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:37.671 [2024-07-15 10:35:14.638697] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:37.671 [2024-07-15 10:35:14.638701] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:37.671 [2024-07-15 10:35:14.648409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:37.671 qpair failed and we were unable to recover it. 
00:27:38.613 Read completed with error (sct=0, sc=8) 00:27:38.613 starting I/O failed 00:27:38.613 Write completed with error (sct=0, sc=8) 00:27:38.613 starting I/O failed 00:27:38.613 Read completed with error (sct=0, sc=8) 00:27:38.613 starting I/O failed 00:27:38.613 Write completed with error (sct=0, sc=8) 00:27:38.613 starting I/O failed 00:27:38.613 Write completed with error (sct=0, sc=8) 00:27:38.613 starting I/O failed 00:27:38.613 Read completed with error (sct=0, sc=8) 00:27:38.613 starting I/O failed 00:27:38.613 Read completed with error (sct=0, sc=8) 00:27:38.613 starting I/O failed 00:27:38.613 Read completed with error (sct=0, sc=8) 00:27:38.613 starting I/O failed 00:27:38.613 Read completed with error (sct=0, sc=8) 00:27:38.613 starting I/O failed 00:27:38.613 Write completed with error (sct=0, sc=8) 00:27:38.613 starting I/O failed 00:27:38.613 Write completed with error (sct=0, sc=8) 00:27:38.613 starting I/O failed 00:27:38.613 Read completed with error (sct=0, sc=8) 00:27:38.613 starting I/O failed 00:27:38.613 Read completed with error (sct=0, sc=8) 00:27:38.613 starting I/O failed 00:27:38.613 Write completed with error (sct=0, sc=8) 00:27:38.613 starting I/O failed 00:27:38.613 Write completed with error (sct=0, sc=8) 00:27:38.613 starting I/O failed 00:27:38.613 Read completed with error (sct=0, sc=8) 00:27:38.613 starting I/O failed 00:27:38.613 Read completed with error (sct=0, sc=8) 00:27:38.613 starting I/O failed 00:27:38.613 Write completed with error (sct=0, sc=8) 00:27:38.613 starting I/O failed 00:27:38.613 Write completed with error (sct=0, sc=8) 00:27:38.613 starting I/O failed 00:27:38.613 Read completed with error (sct=0, sc=8) 00:27:38.613 starting I/O failed 00:27:38.613 Read completed with error (sct=0, sc=8) 00:27:38.613 starting I/O failed 00:27:38.613 Read completed with error (sct=0, sc=8) 00:27:38.613 starting I/O failed 00:27:38.613 Read completed with error (sct=0, sc=8) 00:27:38.613 starting I/O failed 00:27:38.613 Read completed with error (sct=0, sc=8) 00:27:38.613 starting I/O failed 00:27:38.613 Read completed with error (sct=0, sc=8) 00:27:38.613 starting I/O failed 00:27:38.613 Write completed with error (sct=0, sc=8) 00:27:38.614 starting I/O failed 00:27:38.614 Read completed with error (sct=0, sc=8) 00:27:38.614 starting I/O failed 00:27:38.614 Write completed with error (sct=0, sc=8) 00:27:38.614 starting I/O failed 00:27:38.614 Write completed with error (sct=0, sc=8) 00:27:38.614 starting I/O failed 00:27:38.614 Write completed with error (sct=0, sc=8) 00:27:38.614 starting I/O failed 00:27:38.614 Write completed with error (sct=0, sc=8) 00:27:38.614 starting I/O failed 00:27:38.614 Read completed with error (sct=0, sc=8) 00:27:38.614 starting I/O failed 00:27:38.614 [2024-07-15 10:35:15.654013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.614 [2024-07-15 10:35:15.661776] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.614 [2024-07-15 10:35:15.661817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.614 [2024-07-15 10:35:15.661836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.614 [2024-07-15 10:35:15.661844] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll 
NVMe-oF Fabric CONNECT command 00:27:38.614 [2024-07-15 10:35:15.661851] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:27:38.614 [2024-07-15 10:35:15.671468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.614 qpair failed and we were unable to recover it. 00:27:38.614 [2024-07-15 10:35:15.682111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:38.614 [2024-07-15 10:35:15.682139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:38.614 [2024-07-15 10:35:15.682154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:38.614 [2024-07-15 10:35:15.682161] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:38.614 [2024-07-15 10:35:15.682168] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:27:38.614 [2024-07-15 10:35:15.691859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:38.614 qpair failed and we were unable to recover it. 00:27:39.592 Read completed with error (sct=0, sc=8) 00:27:39.592 starting I/O failed 00:27:39.592 Write completed with error (sct=0, sc=8) 00:27:39.592 starting I/O failed 00:27:39.592 Write completed with error (sct=0, sc=8) 00:27:39.592 starting I/O failed 00:27:39.592 Read completed with error (sct=0, sc=8) 00:27:39.592 starting I/O failed 00:27:39.592 Read completed with error (sct=0, sc=8) 00:27:39.592 starting I/O failed 00:27:39.592 Write completed with error (sct=0, sc=8) 00:27:39.592 starting I/O failed 00:27:39.592 Read completed with error (sct=0, sc=8) 00:27:39.592 starting I/O failed 00:27:39.592 Write completed with error (sct=0, sc=8) 00:27:39.592 starting I/O failed 00:27:39.592 Write completed with error (sct=0, sc=8) 00:27:39.592 starting I/O failed 00:27:39.592 Read completed with error (sct=0, sc=8) 00:27:39.592 starting I/O failed 00:27:39.592 Read completed with error (sct=0, sc=8) 00:27:39.592 starting I/O failed 00:27:39.592 Read completed with error (sct=0, sc=8) 00:27:39.592 starting I/O failed 00:27:39.592 Write completed with error (sct=0, sc=8) 00:27:39.592 starting I/O failed 00:27:39.592 Read completed with error (sct=0, sc=8) 00:27:39.592 starting I/O failed 00:27:39.592 Write completed with error (sct=0, sc=8) 00:27:39.592 starting I/O failed 00:27:39.592 Write completed with error (sct=0, sc=8) 00:27:39.592 starting I/O failed 00:27:39.592 Read completed with error (sct=0, sc=8) 00:27:39.592 starting I/O failed 00:27:39.592 Read completed with error (sct=0, sc=8) 00:27:39.592 starting I/O failed 00:27:39.592 Read completed with error (sct=0, sc=8) 00:27:39.592 starting I/O failed 00:27:39.592 Read completed with error (sct=0, sc=8) 00:27:39.592 starting I/O failed 00:27:39.592 Write completed with error (sct=0, sc=8) 00:27:39.592 starting I/O failed 00:27:39.592 Read completed with error (sct=0, sc=8) 00:27:39.592 starting I/O failed 00:27:39.592 Read completed with error (sct=0, sc=8) 00:27:39.592 starting I/O failed 00:27:39.592 Write completed with error (sct=0, sc=8) 00:27:39.592 starting I/O failed 00:27:39.592 Read completed with error (sct=0, sc=8) 
00:27:39.592 starting I/O failed 00:27:39.592 Write completed with error (sct=0, sc=8) 00:27:39.592 starting I/O failed 00:27:39.592 Read completed with error (sct=0, sc=8) 00:27:39.592 starting I/O failed 00:27:39.592 Read completed with error (sct=0, sc=8) 00:27:39.592 starting I/O failed 00:27:39.592 Write completed with error (sct=0, sc=8) 00:27:39.592 starting I/O failed 00:27:39.592 Write completed with error (sct=0, sc=8) 00:27:39.592 starting I/O failed 00:27:39.592 Read completed with error (sct=0, sc=8) 00:27:39.592 starting I/O failed 00:27:39.592 Write completed with error (sct=0, sc=8) 00:27:39.592 starting I/O failed 00:27:39.592 [2024-07-15 10:35:16.698041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:39.592 [2024-07-15 10:35:16.704974] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.592 [2024-07-15 10:35:16.705014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.592 [2024-07-15 10:35:16.705032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.593 [2024-07-15 10:35:16.705039] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.593 [2024-07-15 10:35:16.705046] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002b9980 00:27:39.593 [2024-07-15 10:35:16.714424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:39.593 qpair failed and we were unable to recover it. 00:27:39.593 [2024-07-15 10:35:16.724802] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:39.593 [2024-07-15 10:35:16.724834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:39.593 [2024-07-15 10:35:16.724851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:39.593 [2024-07-15 10:35:16.724859] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:39.593 [2024-07-15 10:35:16.724865] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002b9980 00:27:39.593 [2024-07-15 10:35:16.734667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:39.593 qpair failed and we were unable to recover it. 00:27:39.593 [2024-07-15 10:35:16.734833] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:27:39.593 A controller has encountered a failure and is being reset. 00:27:39.593 [2024-07-15 10:35:16.734951] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:27:39.593 [2024-07-15 10:35:16.772971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:27:39.860 Controller properly reset. 
00:27:39.860 Initializing NVMe Controllers 00:27:39.860 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:27:39.860 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:27:39.860 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:27:39.860 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:27:39.860 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:27:39.860 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:27:39.860 Initialization complete. Launching workers. 00:27:39.860 Starting thread on core 1 00:27:39.860 Starting thread on core 2 00:27:39.860 Starting thread on core 3 00:27:39.860 Starting thread on core 0 00:27:39.860 10:35:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:27:39.861 00:27:39.861 real 0m13.629s 00:27:39.861 user 0m28.754s 00:27:39.861 sys 0m2.113s 00:27:39.861 10:35:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:39.861 10:35:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:39.861 ************************************ 00:27:39.861 END TEST nvmf_target_disconnect_tc2 00:27:39.861 ************************************ 00:27:39.861 10:35:16 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:27:39.861 10:35:16 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n 192.168.100.9 ']' 00:27:39.861 10:35:16 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@73 -- # run_test nvmf_target_disconnect_tc3 nvmf_target_disconnect_tc3 00:27:39.861 10:35:16 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:39.861 10:35:16 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:39.861 10:35:16 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:39.861 ************************************ 00:27:39.861 START TEST nvmf_target_disconnect_tc3 00:27:39.861 ************************************ 00:27:39.861 10:35:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc3 00:27:39.861 10:35:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@57 -- # reconnectpid=3096742 00:27:39.861 10:35:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@59 -- # sleep 2 00:27:39.861 10:35:16 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9' 00:27:39.861 EAL: No free 2048 kB hugepages reported on node 1 00:27:41.773 10:35:18 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@60 -- # kill -9 3095030 00:27:41.773 10:35:18 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@62 -- # sleep 2 00:27:43.155 Write completed with error (sct=0, sc=8) 00:27:43.155 starting I/O failed 00:27:43.155 Read completed with error (sct=0, sc=8) 00:27:43.155 starting 
I/O failed 00:27:43.155 Write completed with error (sct=0, sc=8) 00:27:43.155 starting I/O failed 00:27:43.155 Read completed with error (sct=0, sc=8) 00:27:43.155 starting I/O failed 00:27:43.155 Write completed with error (sct=0, sc=8) 00:27:43.155 starting I/O failed 00:27:43.155 Write completed with error (sct=0, sc=8) 00:27:43.155 starting I/O failed 00:27:43.155 Write completed with error (sct=0, sc=8) 00:27:43.155 starting I/O failed 00:27:43.155 Write completed with error (sct=0, sc=8) 00:27:43.155 starting I/O failed 00:27:43.155 Write completed with error (sct=0, sc=8) 00:27:43.155 starting I/O failed 00:27:43.155 Read completed with error (sct=0, sc=8) 00:27:43.155 starting I/O failed 00:27:43.155 Read completed with error (sct=0, sc=8) 00:27:43.155 starting I/O failed 00:27:43.155 Write completed with error (sct=0, sc=8) 00:27:43.155 starting I/O failed 00:27:43.155 Read completed with error (sct=0, sc=8) 00:27:43.155 starting I/O failed 00:27:43.155 Read completed with error (sct=0, sc=8) 00:27:43.155 starting I/O failed 00:27:43.155 Write completed with error (sct=0, sc=8) 00:27:43.155 starting I/O failed 00:27:43.155 Write completed with error (sct=0, sc=8) 00:27:43.155 starting I/O failed 00:27:43.155 Write completed with error (sct=0, sc=8) 00:27:43.155 starting I/O failed 00:27:43.155 Write completed with error (sct=0, sc=8) 00:27:43.155 starting I/O failed 00:27:43.155 Write completed with error (sct=0, sc=8) 00:27:43.155 starting I/O failed 00:27:43.155 Write completed with error (sct=0, sc=8) 00:27:43.155 starting I/O failed 00:27:43.155 Write completed with error (sct=0, sc=8) 00:27:43.155 starting I/O failed 00:27:43.155 Write completed with error (sct=0, sc=8) 00:27:43.155 starting I/O failed 00:27:43.155 Write completed with error (sct=0, sc=8) 00:27:43.155 starting I/O failed 00:27:43.155 Write completed with error (sct=0, sc=8) 00:27:43.155 starting I/O failed 00:27:43.155 Read completed with error (sct=0, sc=8) 00:27:43.155 starting I/O failed 00:27:43.155 Write completed with error (sct=0, sc=8) 00:27:43.155 starting I/O failed 00:27:43.155 Read completed with error (sct=0, sc=8) 00:27:43.155 starting I/O failed 00:27:43.155 Write completed with error (sct=0, sc=8) 00:27:43.155 starting I/O failed 00:27:43.155 Write completed with error (sct=0, sc=8) 00:27:43.155 starting I/O failed 00:27:43.155 Write completed with error (sct=0, sc=8) 00:27:43.155 starting I/O failed 00:27:43.155 Write completed with error (sct=0, sc=8) 00:27:43.155 starting I/O failed 00:27:43.155 Write completed with error (sct=0, sc=8) 00:27:43.155 starting I/O failed 00:27:43.155 [2024-07-15 10:35:20.124254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:43.155 [2024-07-15 10:35:20.127085] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:43.155 [2024-07-15 10:35:20.127098] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:43.155 [2024-07-15 10:35:20.127103] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:44.097 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 54: 3095030 Killed "${NVMF_APP[@]}" "$@" 00:27:44.097 10:35:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@63 -- # disconnect_init 192.168.100.9 
00:27:44.097 10:35:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:44.097 10:35:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:44.097 10:35:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:44.097 10:35:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:44.097 10:35:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@481 -- # nvmfpid=3097422 00:27:44.097 10:35:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@482 -- # waitforlisten 3097422 00:27:44.097 10:35:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:44.097 10:35:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@829 -- # '[' -z 3097422 ']' 00:27:44.097 10:35:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:44.097 10:35:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:44.097 10:35:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:44.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:44.097 10:35:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:44.097 10:35:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:44.097 [2024-07-15 10:35:20.995020] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:27:44.097 [2024-07-15 10:35:20.995073] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:44.097 EAL: No free 2048 kB hugepages reported on node 1 00:27:44.097 [2024-07-15 10:35:21.079733] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:44.097 [2024-07-15 10:35:21.131541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:44.097 qpair failed and we were unable to recover it. 00:27:44.097 [2024-07-15 10:35:21.133562] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:44.097 [2024-07-15 10:35:21.133587] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:44.097 [2024-07-15 10:35:21.133593] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:44.097 [2024-07-15 10:35:21.133598] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:44.097 [2024-07-15 10:35:21.133602] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:44.097 [2024-07-15 10:35:21.133789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:27:44.097 [2024-07-15 10:35:21.133944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:27:44.097 [2024-07-15 10:35:21.134097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:27:44.097 [2024-07-15 10:35:21.134154] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:44.097 [2024-07-15 10:35:21.134164] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:44.097 [2024-07-15 10:35:21.134170] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:44.097 [2024-07-15 10:35:21.134099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:27:44.667 10:35:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:44.667 10:35:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@862 -- # return 0 00:27:44.667 10:35:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:44.667 10:35:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:44.667 10:35:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:44.667 10:35:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:44.667 10:35:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:44.667 10:35:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.667 10:35:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:44.667 Malloc0 00:27:44.667 10:35:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.667 10:35:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:27:44.667 10:35:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.667 10:35:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:44.667 [2024-07-15 10:35:21.862644] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xb21550/0xb2d0b0) succeed. 00:27:44.927 [2024-07-15 10:35:21.874755] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xb22b90/0xb6e740) succeed. 
00:27:44.927 10:35:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.927 10:35:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:44.927 10:35:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.927 10:35:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:44.927 10:35:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.927 10:35:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:44.927 10:35:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.927 10:35:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:44.927 10:35:21 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.927 10:35:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420 00:27:44.927 10:35:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.927 10:35:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:44.927 [2024-07-15 10:35:22.007715] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:27:44.927 10:35:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.927 10:35:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420 00:27:44.927 10:35:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.927 10:35:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:44.927 10:35:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.927 10:35:22 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@65 -- # wait 3096742 00:27:45.186 [2024-07-15 10:35:22.138603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:45.186 qpair failed and we were unable to recover it. 
00:27:45.186 [2024-07-15 10:35:22.141222] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:45.186 [2024-07-15 10:35:22.141238] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:45.186 [2024-07-15 10:35:22.141243] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:46.131 [2024-07-15 10:35:23.145568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:46.131 qpair failed and we were unable to recover it. 00:27:46.131 [2024-07-15 10:35:23.148115] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:46.131 [2024-07-15 10:35:23.148126] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:46.131 [2024-07-15 10:35:23.148132] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:47.070 [2024-07-15 10:35:24.152479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:47.070 qpair failed and we were unable to recover it. 00:27:47.070 [2024-07-15 10:35:24.154525] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:47.070 [2024-07-15 10:35:24.154534] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:47.070 [2024-07-15 10:35:24.154538] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:48.010 [2024-07-15 10:35:25.158794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:48.010 qpair failed and we were unable to recover it. 00:27:48.010 [2024-07-15 10:35:25.161224] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:48.010 [2024-07-15 10:35:25.161238] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:48.010 [2024-07-15 10:35:25.161244] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:49.391 [2024-07-15 10:35:26.165539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:49.391 qpair failed and we were unable to recover it. 00:27:49.391 [2024-07-15 10:35:26.168124] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:49.392 [2024-07-15 10:35:26.168134] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:49.392 [2024-07-15 10:35:26.168139] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:50.334 [2024-07-15 10:35:27.172566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:50.334 qpair failed and we were unable to recover it. 
00:27:50.334 [2024-07-15 10:35:27.175109] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:50.334 [2024-07-15 10:35:27.175119] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:50.334 [2024-07-15 10:35:27.175124] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:27:51.275 [2024-07-15 10:35:28.179456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:51.275 qpair failed and we were unable to recover it. 00:27:52.214 Write completed with error (sct=0, sc=8) 00:27:52.214 starting I/O failed 00:27:52.214 Read completed with error (sct=0, sc=8) 00:27:52.214 starting I/O failed 00:27:52.214 Read completed with error (sct=0, sc=8) 00:27:52.214 starting I/O failed 00:27:52.214 Write completed with error (sct=0, sc=8) 00:27:52.214 starting I/O failed 00:27:52.214 Read completed with error (sct=0, sc=8) 00:27:52.214 starting I/O failed 00:27:52.214 Read completed with error (sct=0, sc=8) 00:27:52.214 starting I/O failed 00:27:52.214 Write completed with error (sct=0, sc=8) 00:27:52.214 starting I/O failed 00:27:52.214 Read completed with error (sct=0, sc=8) 00:27:52.214 starting I/O failed 00:27:52.214 Write completed with error (sct=0, sc=8) 00:27:52.214 starting I/O failed 00:27:52.214 Write completed with error (sct=0, sc=8) 00:27:52.214 starting I/O failed 00:27:52.214 Read completed with error (sct=0, sc=8) 00:27:52.214 starting I/O failed 00:27:52.214 Read completed with error (sct=0, sc=8) 00:27:52.214 starting I/O failed 00:27:52.214 Read completed with error (sct=0, sc=8) 00:27:52.214 starting I/O failed 00:27:52.214 Write completed with error (sct=0, sc=8) 00:27:52.214 starting I/O failed 00:27:52.214 Write completed with error (sct=0, sc=8) 00:27:52.214 starting I/O failed 00:27:52.214 Write completed with error (sct=0, sc=8) 00:27:52.214 starting I/O failed 00:27:52.214 Read completed with error (sct=0, sc=8) 00:27:52.214 starting I/O failed 00:27:52.214 Read completed with error (sct=0, sc=8) 00:27:52.214 starting I/O failed 00:27:52.214 Write completed with error (sct=0, sc=8) 00:27:52.214 starting I/O failed 00:27:52.214 Write completed with error (sct=0, sc=8) 00:27:52.214 starting I/O failed 00:27:52.214 Write completed with error (sct=0, sc=8) 00:27:52.214 starting I/O failed 00:27:52.214 Write completed with error (sct=0, sc=8) 00:27:52.214 starting I/O failed 00:27:52.214 Write completed with error (sct=0, sc=8) 00:27:52.214 starting I/O failed 00:27:52.214 Read completed with error (sct=0, sc=8) 00:27:52.214 starting I/O failed 00:27:52.214 Write completed with error (sct=0, sc=8) 00:27:52.214 starting I/O failed 00:27:52.214 Write completed with error (sct=0, sc=8) 00:27:52.214 starting I/O failed 00:27:52.214 Read completed with error (sct=0, sc=8) 00:27:52.214 starting I/O failed 00:27:52.214 Write completed with error (sct=0, sc=8) 00:27:52.214 starting I/O failed 00:27:52.214 Read completed with error (sct=0, sc=8) 00:27:52.214 starting I/O failed 00:27:52.214 Read completed with error (sct=0, sc=8) 00:27:52.214 starting I/O failed 00:27:52.214 Write completed with error (sct=0, sc=8) 00:27:52.214 starting I/O failed 00:27:52.214 Write completed with error (sct=0, sc=8) 00:27:52.214 starting I/O failed 00:27:52.214 [2024-07-15 10:35:29.185213] nvme_qpair.c: 
804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:52.214 [2024-07-15 10:35:29.187461] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:52.214 [2024-07-15 10:35:29.187473] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:52.214 [2024-07-15 10:35:29.187478] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:53.155 [2024-07-15 10:35:30.191863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:53.155 qpair failed and we were unable to recover it. 00:27:53.155 [2024-07-15 10:35:30.194220] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:53.155 [2024-07-15 10:35:30.194235] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:53.155 [2024-07-15 10:35:30.194239] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:27:54.098 [2024-07-15 10:35:31.198545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:54.098 qpair failed and we were unable to recover it. 00:27:54.098 [2024-07-15 10:35:31.198714] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:27:54.098 A controller has encountered a failure and is being reset. 00:27:54.098 Resorting to new failover address 192.168.100.9 00:27:54.098 [2024-07-15 10:35:31.198820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:54.098 [2024-07-15 10:35:31.198878] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:27:54.098 [2024-07-15 10:35:31.201495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:27:54.098 Controller properly reset. 
00:27:55.481 [32 Read/Write completions failed with error (sct=0, sc=8), each followed by "starting I/O failed"]
00:27:55.482 [2024-07-15 10:35:32.242973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:56.422 [32 Read/Write completions failed with error (sct=0, sc=8), each followed by "starting I/O failed"]
00:27:56.422 [2024-07-15 10:35:33.280004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:56.422 Initializing NVMe Controllers
00:27:56.422 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:27:56.422 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:27:56.422 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:27:56.422 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:27:56.422 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:27:56.422 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:27:56.422 Initialization complete. Launching workers.
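The two bursts above are the expected signature of this disconnect test rather than a hardware fault: when the target side drops the connection, every command still outstanding on the affected queue completes with a generic-status abort (sct=0, sc=8, which decodes to Command Aborted due to SQ Deletion), the host then logs CQ transport error -6 for that qpair, and the test application re-attaches to nqn.2016-06.io.spdk:cnode1 and relaunches its workers, as the lines that follow show. For quick triage of a run like this, two shell one-liners are enough to count the aborts and see which qpairs were affected; this is only a sketch against a saved copy of the console output (the build.log filename is hypothetical, not something the test produces):

  # Hypothetical triage helpers for a saved copy of this console log (build.log).
  grep -c 'completed with error (sct=0, sc=8)' build.log        # total aborted completions
  grep 'CQ transport error' build.log | grep -o 'qpair id [0-9]*' | sort | uniq -c   # qpairs that hit the transport error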
00:27:56.422 Starting thread on core 1 00:27:56.422 Starting thread on core 2 00:27:56.422 Starting thread on core 3 00:27:56.422 Starting thread on core 0 00:27:56.422 10:35:33 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@66 -- # sync 00:27:56.422 00:27:56.422 real 0m16.408s 00:27:56.422 user 0m59.529s 00:27:56.422 sys 0m3.167s 00:27:56.422 10:35:33 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:56.422 10:35:33 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:56.422 ************************************ 00:27:56.422 END TEST nvmf_target_disconnect_tc3 00:27:56.422 ************************************ 00:27:56.422 10:35:33 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:27:56.422 10:35:33 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:27:56.422 10:35:33 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:27:56.422 10:35:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:56.422 10:35:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:27:56.422 10:35:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:27:56.422 10:35:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:27:56.422 10:35:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:27:56.422 10:35:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:56.422 10:35:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:27:56.422 rmmod nvme_rdma 00:27:56.422 rmmod nvme_fabrics 00:27:56.422 10:35:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:56.422 10:35:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:27:56.422 10:35:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:27:56.422 10:35:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 3097422 ']' 00:27:56.422 10:35:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 3097422 00:27:56.422 10:35:33 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 3097422 ']' 00:27:56.422 10:35:33 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 3097422 00:27:56.422 10:35:33 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:27:56.422 10:35:33 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:56.422 10:35:33 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3097422 00:27:56.422 10:35:33 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:27:56.422 10:35:33 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:27:56.422 10:35:33 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3097422' 00:27:56.422 killing process with pid 3097422 00:27:56.422 10:35:33 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 3097422 00:27:56.422 10:35:33 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 3097422 00:27:56.683 
10:35:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:56.683 10:35:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:27:56.683 00:27:56.683 real 0m39.943s 00:27:56.683 user 2m21.530s 00:27:56.683 sys 0m11.794s 00:27:56.683 10:35:33 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:56.683 10:35:33 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:56.683 ************************************ 00:27:56.683 END TEST nvmf_target_disconnect 00:27:56.683 ************************************ 00:27:56.683 10:35:33 nvmf_rdma -- common/autotest_common.sh@1142 -- # return 0 00:27:56.683 10:35:33 nvmf_rdma -- nvmf/nvmf.sh@126 -- # timing_exit host 00:27:56.683 10:35:33 nvmf_rdma -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:56.683 10:35:33 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:27:56.683 10:35:33 nvmf_rdma -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:27:56.683 00:27:56.683 real 19m43.039s 00:27:56.683 user 46m38.758s 00:27:56.683 sys 5m34.434s 00:27:56.683 10:35:33 nvmf_rdma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:56.683 10:35:33 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:27:56.683 ************************************ 00:27:56.683 END TEST nvmf_rdma 00:27:56.683 ************************************ 00:27:56.683 10:35:33 -- common/autotest_common.sh@1142 -- # return 0 00:27:56.683 10:35:33 -- spdk/autotest.sh@285 -- # run_test spdkcli_nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:27:56.683 10:35:33 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:56.683 10:35:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:56.683 10:35:33 -- common/autotest_common.sh@10 -- # set +x 00:27:56.683 ************************************ 00:27:56.683 START TEST spdkcli_nvmf_rdma 00:27:56.683 ************************************ 00:27:56.683 10:35:33 spdkcli_nvmf_rdma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:27:56.943 * Looking for test storage... 
00:27:56.943 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:27:56.943 10:35:33 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:27:56.943 10:35:33 spdkcli_nvmf_rdma -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:27:56.943 10:35:33 spdkcli_nvmf_rdma -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:27:56.943 10:35:33 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:27:56.943 10:35:33 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # uname -s 00:27:56.943 10:35:33 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:56.943 10:35:33 spdkcli_nvmf_rdma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:56.943 10:35:33 spdkcli_nvmf_rdma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:56.943 10:35:33 spdkcli_nvmf_rdma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:56.943 10:35:33 spdkcli_nvmf_rdma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:56.943 10:35:33 spdkcli_nvmf_rdma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:56.943 10:35:33 spdkcli_nvmf_rdma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:56.943 10:35:33 spdkcli_nvmf_rdma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:56.943 10:35:33 spdkcli_nvmf_rdma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:56.943 10:35:33 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:56.943 10:35:33 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:56.943 10:35:33 spdkcli_nvmf_rdma -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:56.943 10:35:33 spdkcli_nvmf_rdma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:56.943 10:35:33 spdkcli_nvmf_rdma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:56.943 10:35:33 spdkcli_nvmf_rdma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:56.943 10:35:33 spdkcli_nvmf_rdma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:56.943 10:35:33 spdkcli_nvmf_rdma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:56.943 10:35:33 spdkcli_nvmf_rdma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:56.943 10:35:33 spdkcli_nvmf_rdma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:56.943 10:35:33 spdkcli_nvmf_rdma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:56.943 10:35:33 spdkcli_nvmf_rdma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.943 10:35:33 spdkcli_nvmf_rdma -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.944 10:35:33 spdkcli_nvmf_rdma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.944 10:35:33 spdkcli_nvmf_rdma -- paths/export.sh@5 -- # export PATH 00:27:56.944 10:35:33 spdkcli_nvmf_rdma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.944 10:35:33 spdkcli_nvmf_rdma -- nvmf/common.sh@47 -- # : 0 00:27:56.944 10:35:33 spdkcli_nvmf_rdma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:56.944 10:35:33 spdkcli_nvmf_rdma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:56.944 10:35:33 spdkcli_nvmf_rdma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:56.944 10:35:33 spdkcli_nvmf_rdma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:56.944 10:35:33 spdkcli_nvmf_rdma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:56.944 10:35:33 spdkcli_nvmf_rdma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:56.944 10:35:33 spdkcli_nvmf_rdma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:56.944 10:35:33 spdkcli_nvmf_rdma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:56.944 10:35:33 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:27:56.944 10:35:33 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:27:56.944 10:35:33 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:27:56.944 10:35:33 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:27:56.944 10:35:33 spdkcli_nvmf_rdma -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:56.944 10:35:33 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:27:56.944 10:35:33 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:27:56.944 10:35:33 spdkcli_nvmf_rdma -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3100143 00:27:56.944 10:35:33 spdkcli_nvmf_rdma -- spdkcli/common.sh@34 -- # waitforlisten 3100143 00:27:56.944 10:35:33 spdkcli_nvmf_rdma -- common/autotest_common.sh@829 -- # '[' -z 3100143 ']' 00:27:56.944 10:35:33 spdkcli_nvmf_rdma -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:56.944 10:35:33 spdkcli_nvmf_rdma -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:27:56.944 10:35:33 spdkcli_nvmf_rdma -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:56.944 10:35:33 spdkcli_nvmf_rdma -- common/autotest_common.sh@836 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:56.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:56.944 10:35:33 spdkcli_nvmf_rdma -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:56.944 10:35:33 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:27:56.944 [2024-07-15 10:35:34.047398] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:27:56.944 [2024-07-15 10:35:34.047456] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3100143 ] 00:27:56.944 EAL: No free 2048 kB hugepages reported on node 1 00:27:56.944 [2024-07-15 10:35:34.112551] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:57.204 [2024-07-15 10:35:34.179378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:57.204 [2024-07-15 10:35:34.179466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:57.775 10:35:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:57.775 10:35:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@862 -- # return 0 00:27:57.775 10:35:34 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:27:57.775 10:35:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:57.775 10:35:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:27:57.775 10:35:34 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:27:57.775 10:35:34 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@22 -- # [[ rdma == \r\d\m\a ]] 00:27:57.775 10:35:34 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@23 -- # nvmftestinit 00:27:57.775 10:35:34 spdkcli_nvmf_rdma -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:27:57.775 10:35:34 spdkcli_nvmf_rdma -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:57.775 10:35:34 spdkcli_nvmf_rdma -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:57.775 10:35:34 spdkcli_nvmf_rdma -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:57.775 10:35:34 spdkcli_nvmf_rdma -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:57.775 10:35:34 spdkcli_nvmf_rdma -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:57.775 10:35:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:57.775 10:35:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:57.775 10:35:34 spdkcli_nvmf_rdma -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:57.775 10:35:34 spdkcli_nvmf_rdma -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:57.775 10:35:34 spdkcli_nvmf_rdma -- nvmf/common.sh@285 -- # xtrace_disable 00:27:57.775 10:35:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:28:05.910 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:05.910 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@291 -- # pci_devs=() 00:28:05.910 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:05.910 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:05.910 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:05.910 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:05.910 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@293 -- # 
local -A pci_drivers 00:28:05.910 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@295 -- # net_devs=() 00:28:05.910 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:05.910 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@296 -- # e810=() 00:28:05.910 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@296 -- # local -ga e810 00:28:05.910 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@297 -- # x722=() 00:28:05.910 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@297 -- # local -ga x722 00:28:05.910 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@298 -- # mlx=() 00:28:05.910 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@298 -- # local -ga mlx 00:28:05.910 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:05.910 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:05.910 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:05.910 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:05.910 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:05.910 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:05.910 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:05.910 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:05.910 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:05.910 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:05.910 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:05.910 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:05.910 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:28:05.910 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:28:05.910 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:28:05.910 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:28:05.910 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:28:05.910 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:05.910 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:05.910 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.0 (0x15b3 - 0x1015)' 00:28:05.910 Found 0000:98:00.0 (0x15b3 - 0x1015) 00:28:05.910 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:28:05.910 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:28:05.910 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:05.910 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:05.910 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:28:05.910 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:28:05.910 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:05.910 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@341 -- # echo 'Found 0000:98:00.1 (0x15b3 - 0x1015)' 00:28:05.910 Found 0000:98:00.1 
(0x15b3 - 0x1015) 00:28:05.910 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:28:05.910 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:28:05.910 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:05.910 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:05.910 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:28:05.910 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:28:05.910 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:05.910 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:28:05.910 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:05.910 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:05.910 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:28:05.910 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:05.910 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:05.910 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.0: mlx_0_0' 00:28:05.910 Found net devices under 0000:98:00.0: mlx_0_0 00:28:05.910 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:05.910 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:05.910 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:98:00.1: mlx_0_1' 00:28:05.911 Found net devices under 0000:98:00.1: mlx_0_1 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@414 -- # is_hw=yes 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@420 -- # rdma_device_init 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # uname 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # modprobe ib_cm 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@63 -- # modprobe ib_core 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@64 -- # modprobe ib_umad 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@66 -- # modprobe iw_cm 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:28:05.911 10:35:42 
spdkcli_nvmf_rdma -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@502 -- # allocate_nic_ips 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@73 -- # get_rdma_if_list 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:28:05.911 4: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:05.911 link/ether ec:0d:9a:8b:2e:0c brd ff:ff:ff:ff:ff:ff 00:28:05.911 altname enp152s0f0np0 00:28:05.911 altname ens817f0np0 00:28:05.911 inet 192.168.100.8/24 scope global mlx_0_0 00:28:05.911 valid_lft forever preferred_lft forever 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:05.911 10:35:42 
spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:28:05.911 5: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:05.911 link/ether ec:0d:9a:8b:2e:0d brd ff:ff:ff:ff:ff:ff 00:28:05.911 altname enp152s0f1np1 00:28:05.911 altname ens817f1np1 00:28:05.911 inet 192.168.100.9/24 scope global mlx_0_1 00:28:05.911 valid_lft forever preferred_lft forever 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@422 -- # return 0 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@86 -- # get_rdma_if_list 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:28:05.911 10:35:42 spdkcli_nvmf_rdma 
-- nvmf/common.sh@112 -- # interface=mlx_0_1 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:28:05.911 192.168.100.9' 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:28:05.911 192.168.100.9' 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@457 -- # head -n 1 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:28:05.911 192.168.100.9' 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # tail -n +2 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # head -n 1 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@24 -- # NVMF_TARGET_IP=192.168.100.8 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:28:05.911 10:35:42 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:28:05.911 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:28:05.911 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:28:05.911 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:28:05.911 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:28:05.911 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:28:05.911 '\''nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:28:05.912 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:28:05.912 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:28:05.912 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:28:05.912 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:28:05.912 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:05.912 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:28:05.912 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create 
rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:28:05.912 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:05.912 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:28:05.912 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:28:05.912 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:28:05.912 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:28:05.912 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:05.912 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:28:05.912 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:28:05.912 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:28:05.912 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4'\'' '\''192.168.100.8:4262'\'' True 00:28:05.912 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:05.912 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:28:05.912 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:28:05.912 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:28:05.912 ' 00:28:07.822 [2024-07-15 10:35:44.950770] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1b68ef0/0x19efe40) succeed. 00:28:07.822 [2024-07-15 10:35:44.965353] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1b6a3a0/0x1adaec0) succeed. 
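The spdkcli_job.py invocation above is what builds the entire NVMe-oF configuration for this test: six Malloc bdevs, an RDMA transport, three subsystems with namespaces, listeners on ports 4260-4262, allowed hosts, and a referral. Below is a trimmed sketch of the same job format, kept to one bdev, the transport, one subsystem, one namespace and one listener; it reuses the workspace paths from this run and assumes nvmf_tgt is already listening on /var/tmp/spdk.sock. Each entry in the quoted argument is a triple of spdkcli command, a substring expected in its output, and a per-command flag (echoed back as the True/False in the "Executing command" lines that follow).

  # Sketch: a subset of the batch above, in the same 'command' 'expected output' flag format.
  SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  $SPDK/test/spdkcli/spdkcli_job.py "'/bdevs/malloc create 32 512 Malloc3' 'Malloc3' True
'nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192' '' True
'/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True' 'nqn.2014-08.org.spdk:cnode1' True
'/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1' 'Malloc3' True
'/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4' '192.168.100.8:4260' True"

The check_match step further down then runs scripts/spdkcli.py ll /nvmf and diffs the resulting tree against test/spdkcli/match_files/spdkcli_nvmf.test.match.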
00:28:09.202 [2024-07-15 10:35:46.183708] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4260 *** 00:28:11.746 [2024-07-15 10:35:48.322189] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4261 *** 00:28:13.129 [2024-07-15 10:35:50.160089] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4262 *** 00:28:14.510 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:28:14.510 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:28:14.510 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:28:14.510 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:28:14.510 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:28:14.510 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:28:14.510 Executing command: ['nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:28:14.510 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:28:14.510 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:28:14.510 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:28:14.510 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:28:14.510 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:28:14.510 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:28:14.510 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:28:14.510 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:28:14.510 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:28:14.510 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:28:14.510 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:28:14.510 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:28:14.510 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:28:14.510 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:28:14.510 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:28:14.510 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:28:14.510 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4', '192.168.100.8:4262', True] 00:28:14.511 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:28:14.511 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:28:14.511 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:28:14.511 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:28:14.511 10:35:51 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:28:14.511 10:35:51 spdkcli_nvmf_rdma -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:14.511 10:35:51 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:28:14.770 10:35:51 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:28:14.770 10:35:51 spdkcli_nvmf_rdma -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:14.770 10:35:51 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:28:14.770 10:35:51 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@69 -- # check_match 00:28:14.770 10:35:51 spdkcli_nvmf_rdma -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:28:15.030 10:35:52 spdkcli_nvmf_rdma -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:28:15.030 10:35:52 spdkcli_nvmf_rdma -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:28:15.030 10:35:52 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:28:15.030 10:35:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:15.030 10:35:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:28:15.030 10:35:52 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:28:15.030 10:35:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:15.030 10:35:52 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:28:15.030 10:35:52 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:28:15.030 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:28:15.030 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:28:15.030 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:28:15.030 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262'\'' '\''192.168.100.8:4262'\'' 00:28:15.030 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''192.168.100.8:4261'\'' 00:28:15.030 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:28:15.030 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:28:15.030 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:28:15.030 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:28:15.030 
'\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:28:15.030 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:28:15.030 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:28:15.030 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:28:15.030 ' 00:28:20.395 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:28:20.395 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:28:20.395 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:28:20.395 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:28:20.395 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262', '192.168.100.8:4262', False] 00:28:20.395 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '192.168.100.8:4261', False] 00:28:20.395 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:28:20.395 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:28:20.395 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:28:20.395 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:28:20.395 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:28:20.395 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:28:20.395 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:28:20.395 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:28:20.395 10:35:57 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:28:20.395 10:35:57 spdkcli_nvmf_rdma -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:20.395 10:35:57 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:28:20.395 10:35:57 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@90 -- # killprocess 3100143 00:28:20.395 10:35:57 spdkcli_nvmf_rdma -- common/autotest_common.sh@948 -- # '[' -z 3100143 ']' 00:28:20.395 10:35:57 spdkcli_nvmf_rdma -- common/autotest_common.sh@952 -- # kill -0 3100143 00:28:20.395 10:35:57 spdkcli_nvmf_rdma -- common/autotest_common.sh@953 -- # uname 00:28:20.395 10:35:57 spdkcli_nvmf_rdma -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:20.395 10:35:57 spdkcli_nvmf_rdma -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3100143 00:28:20.395 10:35:57 spdkcli_nvmf_rdma -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:20.395 10:35:57 spdkcli_nvmf_rdma -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:20.395 10:35:57 spdkcli_nvmf_rdma -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3100143' 00:28:20.395 killing process with pid 3100143 00:28:20.395 10:35:57 spdkcli_nvmf_rdma -- common/autotest_common.sh@967 -- # kill 3100143 00:28:20.395 10:35:57 spdkcli_nvmf_rdma -- common/autotest_common.sh@972 -- # wait 3100143 00:28:20.395 10:35:57 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@1 -- # nvmftestfini 00:28:20.395 10:35:57 spdkcli_nvmf_rdma -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:20.395 10:35:57 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # sync 00:28:20.395 10:35:57 spdkcli_nvmf_rdma -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 
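At this point the clear-config batch has deleted the namespaces, hosts, listeners, subsystems and Malloc bdevs again, the nvmf_tgt process (pid 3100143) has been killed, and nvmftestfini is about to unload the initiator-side kernel modules. Reduced to its effective commands, the teardown the trace steps through looks like the sketch below (run as root; in nvmf/common.sh the module removal actually sits inside a set +e retry loop, for i in {1..20}):

  # Teardown sketch mirroring the nvmftestfini trace (requires root).
  sync                        # flush before tearing the fabric down
  modprobe -v -r nvme-rdma    # also drops nvme_fabrics, as the rmmod output just below shows
  modprobe -v -r nvme-fabrics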
00:28:20.395 10:35:57 spdkcli_nvmf_rdma -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:28:20.395 10:35:57 spdkcli_nvmf_rdma -- nvmf/common.sh@120 -- # set +e 00:28:20.395 10:35:57 spdkcli_nvmf_rdma -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:20.395 10:35:57 spdkcli_nvmf_rdma -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:28:20.396 rmmod nvme_rdma 00:28:20.396 rmmod nvme_fabrics 00:28:20.396 10:35:57 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:20.396 10:35:57 spdkcli_nvmf_rdma -- nvmf/common.sh@124 -- # set -e 00:28:20.396 10:35:57 spdkcli_nvmf_rdma -- nvmf/common.sh@125 -- # return 0 00:28:20.396 10:35:57 spdkcli_nvmf_rdma -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:28:20.396 10:35:57 spdkcli_nvmf_rdma -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:20.396 10:35:57 spdkcli_nvmf_rdma -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:28:20.396 00:28:20.396 real 0m23.516s 00:28:20.396 user 0m50.019s 00:28:20.396 sys 0m6.355s 00:28:20.396 10:35:57 spdkcli_nvmf_rdma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:20.396 10:35:57 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:28:20.396 ************************************ 00:28:20.396 END TEST spdkcli_nvmf_rdma 00:28:20.396 ************************************ 00:28:20.396 10:35:57 -- common/autotest_common.sh@1142 -- # return 0 00:28:20.396 10:35:57 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:28:20.396 10:35:57 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:28:20.396 10:35:57 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:28:20.396 10:35:57 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:28:20.396 10:35:57 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:28:20.396 10:35:57 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:28:20.396 10:35:57 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:28:20.396 10:35:57 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:28:20.396 10:35:57 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:28:20.396 10:35:57 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:28:20.396 10:35:57 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:28:20.396 10:35:57 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:28:20.396 10:35:57 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:28:20.396 10:35:57 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:28:20.396 10:35:57 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:28:20.396 10:35:57 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:28:20.396 10:35:57 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:28:20.396 10:35:57 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:20.396 10:35:57 -- common/autotest_common.sh@10 -- # set +x 00:28:20.396 10:35:57 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:28:20.396 10:35:57 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:28:20.396 10:35:57 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:28:20.396 10:35:57 -- common/autotest_common.sh@10 -- # set +x 00:28:28.589 INFO: APP EXITING 00:28:28.589 INFO: killing all VMs 00:28:28.589 INFO: killing vhost app 00:28:28.589 INFO: EXIT DONE 00:28:31.133 Waiting for block devices as requested 00:28:31.393 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:31.393 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:31.393 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:31.654 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:31.654 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:31.654 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:31.915 0000:80:01.0 (8086 0b00): vfio-pci -> 
ioatdma 00:28:31.915 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:31.915 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:28:32.176 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:32.176 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:32.176 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:32.436 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:32.436 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:32.436 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:32.696 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:32.696 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:36.900 Cleaning 00:28:36.900 Removing: /var/run/dpdk/spdk0/config 00:28:36.900 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:28:36.900 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:28:36.900 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:28:36.900 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:28:36.900 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:28:36.900 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:28:36.900 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:28:36.900 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:28:36.900 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:28:36.900 Removing: /var/run/dpdk/spdk0/hugepage_info 00:28:36.900 Removing: /var/run/dpdk/spdk1/config 00:28:36.900 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:28:36.900 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:28:36.900 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:28:36.900 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:28:36.900 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:28:36.900 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:28:36.900 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:28:36.900 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:28:36.900 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:28:36.900 Removing: /var/run/dpdk/spdk1/hugepage_info 00:28:36.900 Removing: /var/run/dpdk/spdk1/mp_socket 00:28:36.900 Removing: /var/run/dpdk/spdk2/config 00:28:36.900 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:28:36.900 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:28:36.900 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:28:36.900 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:28:36.900 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:28:36.900 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:28:36.900 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:28:36.900 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:28:36.900 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:28:36.900 Removing: /var/run/dpdk/spdk2/hugepage_info 00:28:36.900 Removing: /var/run/dpdk/spdk3/config 00:28:36.900 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:28:36.900 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:28:36.900 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:28:36.900 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:28:36.900 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:28:36.900 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:28:36.900 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:28:36.900 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:28:36.900 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:28:36.900 Removing: /var/run/dpdk/spdk3/hugepage_info 00:28:36.900 Removing: 
/var/run/dpdk/spdk4/config 00:28:36.900 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:28:36.900 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:28:36.900 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:28:36.900 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:28:36.900 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:28:36.900 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:28:36.900 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:28:36.900 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:28:36.900 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:28:36.900 Removing: /var/run/dpdk/spdk4/hugepage_info 00:28:36.900 Removing: /dev/shm/bdevperf_trace.pid2859601 00:28:36.900 Removing: /dev/shm/bdevperf_trace.pid2994715 00:28:36.900 Removing: /dev/shm/bdev_svc_trace.1 00:28:36.900 Removing: /dev/shm/nvmf_trace.0 00:28:36.900 Removing: /dev/shm/spdk_tgt_trace.pid2722434 00:28:36.900 Removing: /var/run/dpdk/spdk0 00:28:36.900 Removing: /var/run/dpdk/spdk1 00:28:36.900 Removing: /var/run/dpdk/spdk2 00:28:36.900 Removing: /var/run/dpdk/spdk3 00:28:36.900 Removing: /var/run/dpdk/spdk4 00:28:36.900 Removing: /var/run/dpdk/spdk_pid2720907 00:28:36.900 Removing: /var/run/dpdk/spdk_pid2722434 00:28:36.900 Removing: /var/run/dpdk/spdk_pid2722979 00:28:36.900 Removing: /var/run/dpdk/spdk_pid2724185 00:28:36.900 Removing: /var/run/dpdk/spdk_pid2724330 00:28:36.900 Removing: /var/run/dpdk/spdk_pid2725402 00:28:36.900 Removing: /var/run/dpdk/spdk_pid2725727 00:28:36.900 Removing: /var/run/dpdk/spdk_pid2725869 00:28:36.900 Removing: /var/run/dpdk/spdk_pid2731249 00:28:36.900 Removing: /var/run/dpdk/spdk_pid2731748 00:28:36.900 Removing: /var/run/dpdk/spdk_pid2732094 00:28:36.900 Removing: /var/run/dpdk/spdk_pid2732485 00:28:36.900 Removing: /var/run/dpdk/spdk_pid2732888 00:28:36.900 Removing: /var/run/dpdk/spdk_pid2733279 00:28:36.900 Removing: /var/run/dpdk/spdk_pid2733525 00:28:36.900 Removing: /var/run/dpdk/spdk_pid2733688 00:28:36.900 Removing: /var/run/dpdk/spdk_pid2734051 00:28:36.900 Removing: /var/run/dpdk/spdk_pid2735308 00:28:36.900 Removing: /var/run/dpdk/spdk_pid2738685 00:28:36.900 Removing: /var/run/dpdk/spdk_pid2739072 00:28:36.900 Removing: /var/run/dpdk/spdk_pid2739357 00:28:36.900 Removing: /var/run/dpdk/spdk_pid2739541 00:28:36.900 Removing: /var/run/dpdk/spdk_pid2739917 00:28:36.900 Removing: /var/run/dpdk/spdk_pid2740246 00:28:36.900 Removing: /var/run/dpdk/spdk_pid2740621 00:28:36.900 Removing: /var/run/dpdk/spdk_pid2740777 00:28:36.900 Removing: /var/run/dpdk/spdk_pid2741342 00:28:36.900 Removing: /var/run/dpdk/spdk_pid2741762 00:28:36.900 Removing: /var/run/dpdk/spdk_pid2741916 00:28:36.900 Removing: /var/run/dpdk/spdk_pid2742156 00:28:36.900 Removing: /var/run/dpdk/spdk_pid2742593 00:28:36.900 Removing: /var/run/dpdk/spdk_pid2742941 00:28:36.900 Removing: /var/run/dpdk/spdk_pid2743328 00:28:36.900 Removing: /var/run/dpdk/spdk_pid2743512 00:28:36.900 Removing: /var/run/dpdk/spdk_pid2743730 00:28:36.900 Removing: /var/run/dpdk/spdk_pid2743796 00:28:36.900 Removing: /var/run/dpdk/spdk_pid2744147 00:28:36.900 Removing: /var/run/dpdk/spdk_pid2744471 00:28:36.900 Removing: /var/run/dpdk/spdk_pid2744663 00:28:36.900 Removing: /var/run/dpdk/spdk_pid2744898 00:28:36.900 Removing: /var/run/dpdk/spdk_pid2745248 00:28:36.900 Removing: /var/run/dpdk/spdk_pid2745603 00:28:36.900 Removing: /var/run/dpdk/spdk_pid2745950 00:28:36.900 Removing: /var/run/dpdk/spdk_pid2746148 00:28:36.900 Removing: /var/run/dpdk/spdk_pid2746354 
00:28:37.160 Removing: /var/run/dpdk/spdk_pid2746694 00:28:37.160 Removing: /var/run/dpdk/spdk_pid2747043 00:28:37.160 Removing: /var/run/dpdk/spdk_pid2747398 00:28:37.160 Removing: /var/run/dpdk/spdk_pid2747637 00:28:37.160 Removing: /var/run/dpdk/spdk_pid2747826 00:28:37.160 Removing: /var/run/dpdk/spdk_pid2748153 00:28:37.160 Removing: /var/run/dpdk/spdk_pid2748504 00:28:37.160 Removing: /var/run/dpdk/spdk_pid2748860 00:28:37.160 Removing: /var/run/dpdk/spdk_pid2749127 00:28:37.160 Removing: /var/run/dpdk/spdk_pid2749320 00:28:37.160 Removing: /var/run/dpdk/spdk_pid2749604 00:28:37.160 Removing: /var/run/dpdk/spdk_pid2749878 00:28:37.160 Removing: /var/run/dpdk/spdk_pid2750209 00:28:37.160 Removing: /var/run/dpdk/spdk_pid2755224 00:28:37.160 Removing: /var/run/dpdk/spdk_pid2810206 00:28:37.160 Removing: /var/run/dpdk/spdk_pid2815500 00:28:37.160 Removing: /var/run/dpdk/spdk_pid2827587 00:28:37.160 Removing: /var/run/dpdk/spdk_pid2834271 00:28:37.160 Removing: /var/run/dpdk/spdk_pid2838916 00:28:37.160 Removing: /var/run/dpdk/spdk_pid2839731 00:28:37.160 Removing: /var/run/dpdk/spdk_pid2848240 00:28:37.160 Removing: /var/run/dpdk/spdk_pid2859601 00:28:37.160 Removing: /var/run/dpdk/spdk_pid2859955 00:28:37.160 Removing: /var/run/dpdk/spdk_pid2865019 00:28:37.160 Removing: /var/run/dpdk/spdk_pid2872394 00:28:37.160 Removing: /var/run/dpdk/spdk_pid2875364 00:28:37.160 Removing: /var/run/dpdk/spdk_pid2887746 00:28:37.160 Removing: /var/run/dpdk/spdk_pid2919429 00:28:37.160 Removing: /var/run/dpdk/spdk_pid2923963 00:28:37.160 Removing: /var/run/dpdk/spdk_pid2992377 00:28:37.160 Removing: /var/run/dpdk/spdk_pid2993520 00:28:37.160 Removing: /var/run/dpdk/spdk_pid2994715 00:28:37.160 Removing: /var/run/dpdk/spdk_pid2999970 00:28:37.160 Removing: /var/run/dpdk/spdk_pid3009687 00:28:37.160 Removing: /var/run/dpdk/spdk_pid3010805 00:28:37.160 Removing: /var/run/dpdk/spdk_pid3011806 00:28:37.160 Removing: /var/run/dpdk/spdk_pid3012813 00:28:37.160 Removing: /var/run/dpdk/spdk_pid3013290 00:28:37.160 Removing: /var/run/dpdk/spdk_pid3018848 00:28:37.160 Removing: /var/run/dpdk/spdk_pid3018850 00:28:37.160 Removing: /var/run/dpdk/spdk_pid3024182 00:28:37.160 Removing: /var/run/dpdk/spdk_pid3024811 00:28:37.160 Removing: /var/run/dpdk/spdk_pid3025474 00:28:37.160 Removing: /var/run/dpdk/spdk_pid3026247 00:28:37.160 Removing: /var/run/dpdk/spdk_pid3026314 00:28:37.160 Removing: /var/run/dpdk/spdk_pid3032304 00:28:37.160 Removing: /var/run/dpdk/spdk_pid3033127 00:28:37.160 Removing: /var/run/dpdk/spdk_pid3038470 00:28:37.160 Removing: /var/run/dpdk/spdk_pid3041662 00:28:37.160 Removing: /var/run/dpdk/spdk_pid3048512 00:28:37.160 Removing: /var/run/dpdk/spdk_pid3060962 00:28:37.160 Removing: /var/run/dpdk/spdk_pid3060989 00:28:37.160 Removing: /var/run/dpdk/spdk_pid3086260 00:28:37.160 Removing: /var/run/dpdk/spdk_pid3086538 00:28:37.160 Removing: /var/run/dpdk/spdk_pid3093735 00:28:37.160 Removing: /var/run/dpdk/spdk_pid3094348 00:28:37.160 Removing: /var/run/dpdk/spdk_pid3096742 00:28:37.160 Removing: /var/run/dpdk/spdk_pid3100143 00:28:37.160 Clean 00:28:37.420 10:36:14 -- common/autotest_common.sh@1451 -- # return 0 00:28:37.420 10:36:14 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:28:37.420 10:36:14 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:37.420 10:36:14 -- common/autotest_common.sh@10 -- # set +x 00:28:37.420 10:36:14 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:28:37.420 10:36:14 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:37.420 10:36:14 -- 
common/autotest_common.sh@10 -- # set +x 00:28:37.420 10:36:14 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:28:37.420 10:36:14 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log ]] 00:28:37.420 10:36:14 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log 00:28:37.420 10:36:14 -- spdk/autotest.sh@391 -- # hash lcov 00:28:37.420 10:36:14 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:28:37.420 10:36:14 -- spdk/autotest.sh@393 -- # hostname 00:28:37.420 10:36:14 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -t spdk-cyp-12 -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info 00:28:37.680 geninfo: WARNING: invalid characters removed from testname! 00:29:04.278 10:36:37 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:29:04.278 10:36:40 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:29:04.539 10:36:41 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:29:06.447 10:36:43 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:29:07.828 10:36:44 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:29:09.209 10:36:46 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:29:10.591 10:36:47 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:29:10.591 10:36:47 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:29:10.591 10:36:47 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:29:10.591 10:36:47 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:10.591 10:36:47 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:10.591 10:36:47 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:10.591 10:36:47 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:10.591 10:36:47 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:10.591 10:36:47 -- paths/export.sh@5 -- $ export PATH 00:29:10.591 10:36:47 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:10.591 10:36:47 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:29:10.851 10:36:47 -- common/autobuild_common.sh@444 -- $ date +%s 00:29:10.851 10:36:47 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721032607.XXXXXX 00:29:10.851 10:36:47 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721032607.InFZWe 00:29:10.851 10:36:47 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:29:10.851 10:36:47 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:29:10.851 10:36:47 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/' 00:29:10.851 10:36:47 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:29:10.851 10:36:47 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:29:10.851 10:36:47 -- common/autobuild_common.sh@460 -- $ 
get_config_params 00:29:10.851 10:36:47 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:29:10.851 10:36:47 -- common/autotest_common.sh@10 -- $ set +x 00:29:10.851 10:36:47 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:29:10.851 10:36:47 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:29:10.851 10:36:47 -- pm/common@17 -- $ local monitor 00:29:10.851 10:36:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:10.851 10:36:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:10.851 10:36:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:10.851 10:36:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:10.851 10:36:47 -- pm/common@21 -- $ date +%s 00:29:10.851 10:36:47 -- pm/common@25 -- $ sleep 1 00:29:10.851 10:36:47 -- pm/common@21 -- $ date +%s 00:29:10.851 10:36:47 -- pm/common@21 -- $ date +%s 00:29:10.851 10:36:47 -- pm/common@21 -- $ date +%s 00:29:10.851 10:36:47 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721032607 00:29:10.851 10:36:47 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721032607 00:29:10.851 10:36:47 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721032607 00:29:10.851 10:36:47 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721032607 00:29:10.851 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721032607_collect-vmstat.pm.log 00:29:10.851 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721032607_collect-cpu-load.pm.log 00:29:10.851 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721032607_collect-cpu-temp.pm.log 00:29:10.851 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721032607_collect-bmc-pm.bmc.pm.log 00:29:11.791 10:36:48 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:29:11.791 10:36:48 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j144 00:29:11.791 10:36:48 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:29:11.791 10:36:48 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:29:11.791 10:36:48 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:29:11.791 10:36:48 -- spdk/autopackage.sh@19 -- $ timing_finish 00:29:11.791 10:36:48 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:29:11.791 10:36:48 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:29:11.791 10:36:48 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 
00:29:11.791 10:36:48 -- spdk/autopackage.sh@20 -- $ exit 0 00:29:11.791 10:36:48 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:29:11.791 10:36:48 -- pm/common@29 -- $ signal_monitor_resources TERM 00:29:11.791 10:36:48 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:29:11.791 10:36:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:11.791 10:36:48 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:29:11.791 10:36:48 -- pm/common@44 -- $ pid=3119310 00:29:11.791 10:36:48 -- pm/common@50 -- $ kill -TERM 3119310 00:29:11.791 10:36:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:11.791 10:36:48 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:29:11.791 10:36:48 -- pm/common@44 -- $ pid=3119311 00:29:11.791 10:36:48 -- pm/common@50 -- $ kill -TERM 3119311 00:29:11.791 10:36:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:11.791 10:36:48 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:29:11.791 10:36:48 -- pm/common@44 -- $ pid=3119313 00:29:11.791 10:36:48 -- pm/common@50 -- $ kill -TERM 3119313 00:29:11.791 10:36:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:11.791 10:36:48 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:29:11.791 10:36:48 -- pm/common@44 -- $ pid=3119336 00:29:11.791 10:36:48 -- pm/common@50 -- $ sudo -E kill -TERM 3119336 00:29:11.791 + [[ -n 2596953 ]] 00:29:11.791 + sudo kill 2596953 00:29:11.801 [Pipeline] } 00:29:11.819 [Pipeline] // stage 00:29:11.825 [Pipeline] } 00:29:11.843 [Pipeline] // timeout 00:29:11.849 [Pipeline] } 00:29:11.867 [Pipeline] // catchError 00:29:11.872 [Pipeline] } 00:29:11.890 [Pipeline] // wrap 00:29:11.896 [Pipeline] } 00:29:11.916 [Pipeline] // catchError 00:29:11.926 [Pipeline] stage 00:29:11.928 [Pipeline] { (Epilogue) 00:29:11.944 [Pipeline] catchError 00:29:11.946 [Pipeline] { 00:29:11.962 [Pipeline] echo 00:29:11.964 Cleanup processes 00:29:11.971 [Pipeline] sh 00:29:12.258 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:29:12.258 3119415 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/sdr.cache 00:29:12.258 3119857 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:29:12.274 [Pipeline] sh 00:29:12.561 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:29:12.561 ++ grep -v 'sudo pgrep' 00:29:12.561 ++ awk '{print $1}' 00:29:12.561 + sudo kill -9 3119415 00:29:12.575 [Pipeline] sh 00:29:12.860 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:29:22.927 [Pipeline] sh 00:29:23.213 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:29:23.213 Artifacts sizes are good 00:29:23.227 [Pipeline] archiveArtifacts 00:29:23.234 Archiving artifacts 00:29:23.391 [Pipeline] sh 00:29:23.676 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-phy-autotest 00:29:23.691 [Pipeline] cleanWs 00:29:23.702 [WS-CLEANUP] Deleting project workspace... 00:29:23.702 [WS-CLEANUP] Deferred wipeout is used... 
00:29:23.709 [WS-CLEANUP] done 00:29:23.712 [Pipeline] } 00:29:23.735 [Pipeline] // catchError 00:29:23.750 [Pipeline] sh 00:29:24.041 + logger -p user.info -t JENKINS-CI 00:29:24.052 [Pipeline] } 00:29:24.071 [Pipeline] // stage 00:29:24.078 [Pipeline] } 00:29:24.097 [Pipeline] // node 00:29:24.103 [Pipeline] End of Pipeline 00:29:24.140 Finished: SUCCESS